+
-It is possible to upload more vectors to a single table if Memory allows it (for example, 4XL plan and higher for OpenAI embeddings). But it will affect the performance of the queries: RPS will be lower, and latency will be higher. Scaling should be almost linear, but it is recommended to benchmark your workload to find the optimal number of vectors per table and per database instance.
+It is possible to upload more vectors to a single table if memory allows it (for example, on the 4XL plan and higher for OpenAI embeddings), but this will affect query performance: QPS will be lower and latency will be higher. Scaling should be almost linear, but we recommend benchmarking your workload to find the optimal number of vectors per table and per database instance.
diff --git a/apps/docs/pages/guides/ai/engineering-for-scale.mdx b/apps/docs/pages/guides/ai/engineering-for-scale.mdx
index 871d0a5059c93..0c4fff2fb2674 100644
--- a/apps/docs/pages/guides/ai/engineering-for-scale.mdx
+++ b/apps/docs/pages/guides/ai/engineering-for-scale.mdx
@@ -53,7 +53,7 @@ const { data, error } = await supabase
## Enterprise workloads
-As you move into production, we recommend running splitting your collections into separate projects. This is because it allows your vector stores to scale independently of your production data. Vectors typically grow faster than operational data, and they have different resource requirements. Running them on separate databases removes the single-point-of-failure.
+As you move into production, we recommend splitting your collections into separate projects. This allows your vector stores to scale independently of your production data. Vectors typically grow faster than operational data and have different resource requirements, so running them on separate databases removes a single point of failure.
+
+
+
+
+## HNSW, understanding `ef_construction`, `ef_search`, and `m`
+
+Index build parameters:
+
+- `m` is the number of bi-directional links created for every new element during construction. Higher `m` is suitable for datasets with high dimensionality and/or high accuracy requirements. Reasonable values for `m` are between 2 and 100; a range of 12-48 is a good starting point for most use cases (16 is the default value).
+
+- `ef_construction` is the size of the dynamic list of nearest neighbors used during index construction. Higher `ef_construction` results in a better-quality index and higher accuracy, but it also increases the time required to build the index. `ef_construction` has to be at least 2 \* `m` (64 is the default value). At some point, increasing `ef_construction` no longer improves index quality. You can measure accuracy by setting `ef_search` = `ef_construction`: if accuracy is lower than 0.9, there is room for improvement.
+
+Search parameters:
+
+- `ef_search` is the size of the dynamic list for the nearest neighbors (used during the search). Increasing `ef_search` will result in better accuracy, but it will also increase the time required to execute a query (40 is the default value).
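+
+As a minimal sketch (the `items` table, the `embedding` column, and the values shown are placeholders for illustration, not recommendations), the build parameters are supplied when the index is created, while `ef_search` is set per session or per transaction:
+
+```sql
+-- Build-time parameters: m and ef_construction are fixed when the index is created.
+create index on items using hnsw (embedding vector_cosine_ops) with (m = 16, ef_construction = 64);
+
+-- Query-time parameter: ef_search affects subsequent queries in this session.
+set hnsw.ef_search = 100;
+select id from items order by embedding <=> '[0.1, 0.2, 0.3]' limit 10;
+```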
+
+
+
+
+
+
+## IVFFlat, understanding `probes` and `lists`
Indexes used for approximate vector similarity search in pgvector divide a dataset into partitions. The number of these partitions is defined by the `lists` constant. The `probes` setting controls how many lists are searched during a query.
-The values of lists and probes directly affect precision and requests per second (RPS).
+The values of lists and probes directly affect accuracy and queries per second (QPS).
-- Higher `lists` means an index will be built slower, but you can achieve better RPS and precision.
-- Higher `probes` means that select queries will be slower, but you can achieve better precision.
-- `lists` and `probes` are not independent. Higher `lists` means that you will have to use higher `probes` to achieve the same precision.
+- Higher `lists` means an index will be built slower, but you can achieve better QPS and accuracy.
+- Higher `probes` means that select queries will be slower, but you can achieve better accuracy.
+- `lists` and `probes` are not independent. Higher `lists` means that you will have to use higher `probes` to achieve the same accuracy.
-You can find more examples of how `lists` and `probes` constants affect precision and RPS in [pgvector 0.4.0 performance](https://supabase.com/blog/pgvector-performance) blogpost.
+You can find more examples of how the `lists` and `probes` constants affect accuracy and QPS in the [pgvector 0.4.0 performance](https://supabase.com/blog/pgvector-performance) blog post.
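+
+As a rough sketch of how these two settings are applied (the `items` table and `embedding` column are placeholder names), `lists` is fixed when the index is built, while `probes` is set per session or per transaction:
+
+```sql
+-- Build time: partition the data into 100 inverted lists.
+create index on items using ivfflat (embedding vector_cosine_ops) with (lists = 100);
+
+-- Query time: search the 10 nearest lists (the default is 1).
+set ivfflat.probes = 10;
+select id from items order by embedding <=> '[0.1, 0.2, 0.3]' limit 10;
+```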
-
-# create vector store client
-vx = vecs.create_client(DB_CONNECTION)
-```
-
-### Create collection
-
-You can create a collection to store vectors specifying the collection's name and the number of dimensions in the vectors you intend to store.
-
-```python
-docs = vx.create_collection(name="docs", dimension=3)
-```
-
-If another collection exists with the same name,
-
-### Get an existing collection
-
-To access a previously created collection, use `get_collection` to retrieve it by name
-
-```python
-docs = vx.get_collection(name="docs")
-```
-
-### Upserting vectors
-
-`vecs` combines the concepts of "insert" and "update" into "upsert". Upserting records adds them to the collection if the `id` is not present, or updates the existing record if the `id` does exist.
-
-```python
-# add records to the collection
-docs.upsert(
- vectors=[
- (
- "vec0", # the vector's identifier
- [0.1, 0.2, 0.3], # the vector. list or np.array
- {"year": 1973} # associated metadata
- ),
- (
- "vec1",
- [0.7, 0.8, 0.9],
- {"year": 2012}
- )
- ]
-)
-```
-
-### Create an index
-
-Collections can be queried immediately after being created.
-However, for good performance, the collection should be indexed after records have been upserted.
-
-Indexes should be created **after** the collection has been populated with records. Building an index on an empty collection will significantly reduce recall. Once the index has been created you can still upsert new documents into the collection but you should rebuild the index if the size of the collection more than doubles.
-
-Only one index may exist per-collection. By default, creating an index will replace any existing index.
-
-To create an index:
-
-```python
-##
-# INSERT RECORDS HERE
-##
-
-# index the collection to be queried by cosine distance
-docs.create_index(measure=vecs.IndexMeasure.cosine_distance)
-```
-
-Available options for query `measure` are:
-
-- `vecs.IndexMeasure.cosine_distance`
-- `vecs.IndexMeasure.l2_distance`
-- `vecs.IndexMeasure.max_inner_product`
-
-which correspond to different methods for comparing query vectors to the vectors in the database.
-
-If you aren't sure which to use, stick with the default (cosine_distance) by omitting the parameter i.e.: `docs.create_index()`.
-
-
-
-The time required to create an index grows with the number of records and size of vectors. For a few thousand records, expect a response in under a minute. It may take a few minutes for larger collections.
-
-
-
-For an in-depth guide on vector indexes, see [Managing indexes](/docs/guides/ai/managing-indexes).
-
-### Query
-
-Be aware that indexes are essential for good performance. If you do not create an index, every query will return a warning that includes the `IndexMeasure` you should index.
-
-#### Basic
-
-The simplest form of search is to provide a query vector.
-
-```python
-docs.query(
- query_vector=[0.4,0.5,0.6], # required
- limit=5, # number of records to return
- filters={}, # metadata filters
- measure="cosine_distance", # distance measure to use
- include_value=False, # should distance measure values be returned?
- include_metadata=False, # should record metadata be returned?
-)
-```
-
-Which returns a list of vector record `ids`.
-
-#### Metadata Filtering
-
-The metadata that is associated with each record can also be filtered during a query.
-
-As an example, `{"year": {"$eq": 2005}}` filters a `year` metadata key to be equal to 2005
-
-In context:
-
-```python
-docs.query(
- query_vector=[0.4,0.5,0.6],
- filters={"year": {"$eq": 2012}}, # metadata filters
-)
-```
-
-For a complete reference, see the [metadata guide](https://supabase.github.io/vecs/concepts_metadata/).
-
-## Resources
-
-- Official Vecs Documentation: https://supabase.github.io/vecs/api
-- Source Code: https://github.com/supabase/vecs
-
-export const Page = ({ children }) =>
-
-export default Page
diff --git a/apps/docs/pages/guides/ai/python-clients.mdx b/apps/docs/pages/guides/ai/python-clients.mdx
new file mode 100644
index 0000000000000..6e6695d35923e
--- /dev/null
+++ b/apps/docs/pages/guides/ai/python-clients.mdx
@@ -0,0 +1,18 @@
+import Layout from '~/layouts/DefaultGuideLayout'
+
+export const meta = {
+ id: 'ai-python-clients',
+ title: 'Choosing a Client',
+ description: 'Learn how to manage vectors using Python',
+ sidebar_label: 'Choosing a Client',
+}
+
+As described in [Structured & Unstructured Embeddings](/docs/guides/ai/structured-unstructured), AI workloads come in many forms.
+
+For data science or ephemeral workloads, the [Supabase Vecs](https://supabase.github.io/vecs/) client gets you started quickly. All you need is a connection string and vecs handles setting up your database to store and query vectors with associated metadata.
+
+For production Python applications with version-controlled migrations, we recommend adding first-class vector support to your toolchain by [registering the vector type with your ORM](https://github.com/pgvector/pgvector-python). pgvector provides bindings for the most commonly used SQL drivers/libraries including Django, SQLAlchemy, SQLModel, psycopg, asyncpg and Peewee.
+
+export const Page = ({ children }) =>
+
+export default Page
diff --git a/apps/docs/pages/guides/ai/vecs-python-client.mdx b/apps/docs/pages/guides/ai/vecs-python-client.mdx
index bdcecb77c433a..ce7e5a7517c6c 100644
--- a/apps/docs/pages/guides/ai/vecs-python-client.mdx
+++ b/apps/docs/pages/guides/ai/vecs-python-client.mdx
@@ -79,7 +79,7 @@ docs.query(
## Deep Dive
-For a more in-depth guide on `vecs` collections, see [Managing collections](/docs/guides/ai/managing-collections).
+For a more in-depth guide on `vecs` collections, see [API](/docs/guides/ai/python/api).
## Resources
diff --git a/apps/docs/pages/guides/ai/vector-columns.mdx b/apps/docs/pages/guides/ai/vector-columns.mdx
index b263cb9b619de..8b4d29643a545 100644
--- a/apps/docs/pages/guides/ai/vector-columns.mdx
+++ b/apps/docs/pages/guides/ai/vector-columns.mdx
@@ -7,7 +7,7 @@ export const meta = {
sidebar_label: 'Vector columns',
}
-Supabase offers a number of different ways to store and query vectors within Postgres. If you prefer to use Python to store and query your vectors using collections, see [Managing collections](/docs/guides/ai/managing-collections). If you want more control over vectors within your own Postgres tables or would like to interact with them using a different language like JavaScript, keep reading.
+Supabase offers a number of different ways to store and query vectors within Postgres. The SQL included in this guide is applicable for clients in all programming languages. If you are a Python user, see your [Python client options](/docs/guides/ai/python-clients) after reading the `Learn` section.
Vectors in Supabase are enabled via [pgvector](https://github.com/pgvector/pgvector/), a PostgreSQL extension for storing and querying vectors in Postgres. It can be used to store [embeddings](/docs/guides/ai/concepts#what-are-embeddings).
@@ -162,7 +162,7 @@ Vectors and embeddings can be used for much more than search. Learn more about e
### Indexes
-Once your vector table starts to grow, you will likely want to add an index to speed up queries. See [Managing indexes](/docs/guides/ai/managing-indexes) to learn how vector indexes work and how to create them.
+Once your vector table starts to grow, you will likely want to add an index to speed up queries. See [Vector indexes](/docs/guides/ai/vector-indexes) to learn how vector indexes work and how to create them.
export const Page = ({ children }) =>
diff --git a/apps/docs/pages/guides/ai/vector-indexes.mdx b/apps/docs/pages/guides/ai/vector-indexes.mdx
new file mode 100644
index 0000000000000..2bb3a4e310e2a
--- /dev/null
+++ b/apps/docs/pages/guides/ai/vector-indexes.mdx
@@ -0,0 +1,39 @@
+import Layout from '~/layouts/DefaultGuideLayout'
+
+export const meta = {
+ id: 'ai-vector-indexes',
+ title: 'Vector indexes',
+ description: 'Understanding vector indexes',
+ sidebar_label: 'Vector indexes',
+}
+
+Once your vector table starts to grow, you will likely want to add an index to speed up queries. Without indexes, you'll be performing a sequential scan which can be a resource-intensive operation when you have many records.
+
+## Choosing an index
+
+Today `pgvector` supports two types of indexes:
+
+- [HNSW](/docs/guides/ai/vector-indexes/hnsw-indexes)
+- [IVFFlat](/docs/guides/ai/vector-indexes/ivf-indexes)
+
+In general we recommend using [HNSW](/docs/guides/ai/vector-indexes/hnsw-indexes) because of its [performance](https://supabase.com/blog/increase-performance-pgvector-hnsw#hnsw-performance-1536-dimensions) and [robustness against changing data](/docs/guides/ai/vector-indexes/hnsw-indexes#when-should-you-create-hnsw-indexes).
+
+## Distance operators
+
+Indexes can be used to improve performance of nearest neighbor search using various distance measures. `pgvector` includes 3 distance operators:
+
+| Operator | Description | [**Operator class**](https://www.postgresql.org/docs/current/sql-createopclass.html) |
+| -------- | ---------------------- | ------------------------------------------------------------------------------------ |
+| `<->` | Euclidean distance | `vector_l2_ops` |
+| `<#>` | negative inner product | `vector_ip_ops` |
+| `<=>` | cosine distance | `vector_cosine_ops` |
+
+Currently vectors with up to 2,000 dimensions can be indexed.
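+
+For illustration (assuming a table `items` with a vector column named `embedding`), the operator appears in the `order by` clause of a nearest-neighbor query; an index is only used when its operator class matches the operator in the query:
+
+```sql
+-- Cosine-distance search; an index created with vector_cosine_ops can serve this query.
+select id from items order by embedding <=> '[0.1, 0.2, 0.3]' limit 5;
+```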
+
+## Resources
+
+Read more about indexing on `pgvector`'s [GitHub page](https://github.com/pgvector/pgvector#indexing).
+
+export const Page = ({ children }) =>
+
+export default Page
diff --git a/apps/docs/pages/guides/ai/vector-indexes/hnsw-indexes.mdx b/apps/docs/pages/guides/ai/vector-indexes/hnsw-indexes.mdx
new file mode 100644
index 0000000000000..75fb6d336e8a8
--- /dev/null
+++ b/apps/docs/pages/guides/ai/vector-indexes/hnsw-indexes.mdx
@@ -0,0 +1,101 @@
+import Layout from '~/layouts/DefaultGuideLayout'
+
+export const meta = {
+ id: 'ai-hnsw-indexes',
+ title: 'HNSW indexes',
+ description: 'Understanding HNSW indexes in pgvector',
+ sidebar_label: 'HNSW indexes',
+}
+
+HNSW is an algorithm for approximate nearest neighbor search. It is a frequently used index type that can improve performance when querying highly-dimensional vectors, like those representing embeddings.
+
+## Usage
+
+The way you create an HNSW index depends on the distance operator you are using. `pgvector` includes 3 distance operators:
+
+| Operator | Description | [**Operator class**](https://www.postgresql.org/docs/current/sql-createopclass.html) |
+| -------- | ---------------------- | ------------------------------------------------------------------------------------ |
+| `<->` | Euclidean distance | `vector_l2_ops` |
+| `<#>` | negative inner product | `vector_ip_ops` |
+| `<=>` | cosine distance | `vector_cosine_ops` |
+
+Use the following SQL commands to create an HNSW index for the operator(s) used in your queries.
+
+### Euclidean L2 distance (`vector_l2_ops`)
+
+```sql
+create index on items using hnsw (column_name vector_l2_ops);
+```
+
+### Inner product (`vector_ip_ops`)
+
+```sql
+create index on items using hnsw (column_name vector_ip_ops);
+```
+
+### Cosine distance (`vector_cosine_ops`)
+
+```sql
+create index on items using hnsw (column_name vector_cosine_ops);
+```
+
+Currently vectors with up to 2,000 dimensions can be indexed.
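+
+Once the index exists, nearest-neighbor queries that order by the matching operator can use it. The sketch below is illustrative (`items` and `column_name` are placeholders); `hnsw.ef_search` is an optional query-time setting that trades speed for accuracy:
+
+```sql
+-- Optionally raise ef_search from its default of 40 for higher accuracy.
+set hnsw.ef_search = 100;
+
+-- An HNSW index built with vector_cosine_ops can serve this query.
+select id from items order by column_name <=> '[0.1, 0.2, 0.3]' limit 10;
+```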
+
+## How does HNSW work?
+
+HNSW uses proximity graphs (graphs connecting nodes based on distance between them) to approximate nearest-neighbor search. To understand HNSW, we can break it down into 2 parts:
+
+- **Hierarchical (H):** The algorithm operates over multiple layers
+- **Navigable Small World (NSW):** Each vector is a node within a graph and is connected to several other nodes
+
+### Hierarchical
+
+The hierarchical aspect of HNSW builds on the idea of skip lists.
+
+Skip lists are multi-layer linked lists. The bottom layer is a regular linked list connecting an ordered sequence of elements. Each new layer above removes some elements from the underlying layer (based on a fixed probability), producing a sparser subsequence that “skips” over elements.
+
+
+
+
+
+
+When searching for an element, the algorithm begins at the top layer and traverses its linked list horizontally. If the target element is found, the algorithm stops and returns it. Otherwise, if the next element in the list is greater than the target (or `NULL`), the algorithm drops down to the next layer below. Since each layer below is less sparse than the layer above (with the bottom layer connecting all elements), the target will eventually be found. Skip lists offer O(log n) average complexity for both search and insertion/deletion.
+
+### Navigable Small World
+
+A navigable small world (NSW) is a special type of proximity graph that also includes long-range connections between nodes. These long-range connections support the “small world” property of the graph, meaning almost every node can be reached from any other node within a few hops. Without these additional long-range connections, many hops would be required to reach a far-away node.
+
+
+
+The “navigable” part of NSW specifically refers to the ability to logarithmically scale the greedy search algorithm on the graph, an algorithm that attempts to make only the locally optimal choice at each hop. Without this property, the graph may still be considered a small world with short paths between far-away nodes, but the greedy algorithm tends to miss them. Greedy search is ideal for NSW because it is quick to navigate and has low computational costs.
+
+### Hierarchical + Navigable Small World
+
+HNSW combines these two concepts. From the hierarchical perspective, the bottom layer consists of a NSW made up of short links between nodes. Each layer above “skips” elements and creates longer links between nodes further away from each other.
+
+Just like skip lists, search starts at the top layer and works its way down until it finds the target element. However, instead of comparing a scalar value at each layer to determine whether or not to descend to the layer below, a multi-dimensional distance measure (such as Euclidean distance) is used.
+
+## When should you create HNSW indexes?
+
+HNSW should be your default choice when creating a vector index. Add the index when you don't need 100% accuracy and are willing to trade a small amount of accuracy for a lot of throughput.
+
+Unlike IVFFlat indexes, you are safe to build an HNSW index immediately after the table is created. HNSW indexes are based on graphs which inherently are not affected by the same limitations as IVFFlat. As new data is added to the table, the index will be filled automatically and the index structure will remain optimal.
+
+## Resources
+
+Read more about indexing on `pgvector`'s [GitHub page](https://github.com/pgvector/pgvector#indexing).
+
+export const Page = ({ children }) =>
+
+export default Page
diff --git a/apps/docs/pages/guides/ai/managing-indexes.mdx b/apps/docs/pages/guides/ai/vector-indexes/ivf-indexes.mdx
similarity index 60%
rename from apps/docs/pages/guides/ai/managing-indexes.mdx
rename to apps/docs/pages/guides/ai/vector-indexes/ivf-indexes.mdx
index 6269919badf40..46795777c6841 100644
--- a/apps/docs/pages/guides/ai/managing-indexes.mdx
+++ b/apps/docs/pages/guides/ai/vector-indexes/ivf-indexes.mdx
@@ -1,90 +1,97 @@
import Layout from '~/layouts/DefaultGuideLayout'
export const meta = {
- id: 'ai-managing-indexes',
- title: 'Managing indexes',
- description: 'Understanding vector indexes',
- sidebar_label: 'Managing indexes',
+ id: 'ai-ivf-indexes',
+ title: 'IVFFlat indexes',
+ description: 'Understanding IVFFlat indexes in pgvector',
+ sidebar_label: 'IVFFlat indexes',
}
-Once your vector table starts to grow, you will likely want to add an index to speed up queries. Without indexes, you'll be performing a sequential scan which can be a resource-intensive operation when you have many records.
+IVFFlat is a type of vector index for approximate nearest neighbor search. It is a frequently used index type that can improve performance when querying highly-dimensional vectors, like those representing embeddings.
-## IVFFlat indexes
+## Choosing an index
-Today `pgvector` indexes use an algorithm called IVFFlat. IVF stands for 'inverted file indexes'. It works by clustering your vectors in order to reduce the similarity search scope. Rather than comparing a vector to every other vector, the vector is only compared against vectors within the same cell cluster (or nearby clusters, depending on your configuration).
+Today `pgvector` supports two types of indexes:
-### Inverted lists (cell clusters)
+- [HNSW](/docs/guides/ai/vector-indexes/hnsw-indexes)
+- [IVFFlat](/docs/guides/ai/vector-indexes/ivf-indexes)
-When you create the index, you choose the number of inverted lists (cell clusters). Increase this number to speed up queries, but at the expense of recall.
+In general we recommend using [HNSW](/docs/guides/ai/vector-indexes/hnsw-indexes) because of its [performance](https://supabase.com/blog/increase-performance-pgvector-hnsw#hnsw-performance-1536-dimensions) and [robustness against changing data](/docs/guides/ai/vector-indexes/hnsw-indexes#when-should-you-create-hnsw-indexes). If you have a special use case that requires IVFFlat instead, keep reading.
-For example, to create an index with 100 lists on a column that uses the cosine operator:
+## Usage
-```sql
-create index on items using ivfflat (column_name vector_cosine_ops) with (lists = 100);
-```
+The way you create an IVFFlat index depends on the distance operator you are using. `pgvector` includes 3 distance operators:
-For more info on the different operators, see [Distance operations](#distance-operators).
+| Operator | Description | [**Operator class**](https://www.postgresql.org/docs/current/sql-createopclass.html) |
+| -------- | ---------------------- | ------------------------------------------------------------------------------------ |
+| `<->` | Euclidean distance | `vector_l2_ops` |
+| `<#>` | negative inner product | `vector_ip_ops` |
+| `<=>` | cosine distance | `vector_cosine_ops` |
-For every query, you can set the number of probes (1 by default). The number of probes corresponds to the number of nearby cells to probe for a match. Increase this for better recall at the expense of speed.
+Use the following SQL commands to create an IVFFlat index for the operator(s) used in your queries.
-To set the number of probes for the duration of the session run:
+### Euclidean L2 distance (`vector_l2_ops`)
```sql
-set ivfflat.probes = 10;
+create index on items using ivfflat (column_name vector_l2_ops) with (lists = 100);
```
-To set the number of probes only for the current transaction run:
+### Inner product (`vector_ip_ops`)
```sql
-begin;
-set local ivfflat.probes = 10;
-select ...
-commit;
+create index on items using ivfflat (column_name vector_ip_ops) with (lists = 100);
```
-If the number of probes is the same as the number of lists, exact nearest neighbor search will be performed and the planner won't use the index.
+### Cosine distance (`vector_cosine_ops`)
-### Approximate nearest neighbor
+```sql
+create index on items using ivfflat (column_name vector_cosine_ops) with (lists = 100);
+```
-One important note with IVF indexes is that nearest neighbor search is approximate, since exact search on high dimensional data can't be indexed efficiently. This means that similarity results will change (slightly) after you add an index (trading recall for speed).
+Currently vectors with up to 2,000 dimensions can be indexed.
-## Distance operators
+## How does IVFFlat work?
-The type of index required depends on the distance operator you are using. `pgvector` includes 3 distance operators:
+IVF stands for 'inverted file indexes'. It works by clustering your vectors in order to reduce the similarity search scope. Rather than comparing a vector to every other vector, the vector is only compared against vectors within the same cell cluster (or nearby clusters, depending on your configuration).
-| Operator | Description | [**Operator class**](https://www.postgresql.org/docs/current/sql-createopclass.html) |
-| -------- | ---------------------- | ------------------------------------------------------------------------------------ |
-| `<->` | Euclidean distance | `vector_l2_ops` |
-| `<#>` | negative inner product | `vector_ip_ops` |
-| `<=>` | cosine distance | `vector_cosine_ops` |
+### Inverted lists (cell clusters)
-Use the following SQL commands to create an index for the operator(s) used in your queries.
+When you create the index, you choose the number of inverted lists (cell clusters). Increase this number to speed up queries, but at the expense of recall.
-### Euclidean L2 distance (`vector_l2_ops`)
+For example, to create an index with 100 lists on a column that uses the cosine operator:
```sql
-create index on items using ivfflat (column_name vector_l2_ops) with (lists = 100);
+create index on items using ivfflat (column_name vector_cosine_ops) with (lists = 100);
```
-### Inner product (`vector_ip_ops`)
+For more info on the different operators, see [Distance operations](#distance-operators).
+
+For every query, you can set the number of probes (1 by default). The number of probes corresponds to the number of nearby cells to probe for a match. Increase this for better recall at the expense of speed.
+
+To set the number of probes for the duration of the session run:
```sql
-create index on items using ivfflat (column_name vector_ip_ops) with (lists = 100);
+set ivfflat.probes = 10;
```
-### Cosine distance (`vector_cosine_ops`)
+To set the number of probes only for the current transaction run:
```sql
-create index on items using ivfflat (column_name vector_cosine_ops) with (lists = 100);
+begin;
+set local ivfflat.probes = 10;
+select ...
+commit;
```
-Currently vectors with up to 2,000 dimensions can be indexed.
+If the number of probes is the same as the number of lists, exact nearest neighbor search will be performed and the planner won't use the index.
-If you are using the `vecs` Python library, follow the instructions in [Managing collections](/docs/guides/ai/managing-collections#create-an-index) to create indexes.
+### Approximate nearest neighbor
+
+One important note with IVF indexes is that nearest neighbor search is approximate, since exact search on high dimensional data can't be indexed efficiently. This means that similarity results will change (slightly) after you add an index (trading recall for speed).
-## When should you add indexes?
+## When should you create IVFFlat indexes?
-`pgvector` recommends adding indexes only after the table has sufficient data, so that the internal IVFFlat cell clusters are based on your data's distribution. Anytime the distribution changes significantly, consider recreating indexes.
+`pgvector` recommends building IVFFlat indexes only after the table has sufficient data, so that the internal IVFFlat cell clusters are based on your data's distribution. Anytime the distribution changes significantly, consider rebuilding indexes.
## Resources
diff --git a/apps/docs/pages/guides/auth.mdx b/apps/docs/pages/guides/auth.mdx
index b827086f831df..2a516ff35ea5c 100644
--- a/apps/docs/pages/guides/auth.mdx
+++ b/apps/docs/pages/guides/auth.mdx
@@ -6,12 +6,10 @@ export const meta = {
id: 'auth',
title: 'Auth',
description: 'Use Supabase to Authenticate and Authorize your users.',
- sidebar_label: 'Overview',
+ subtitle: 'Use Supabase to authenticate and authorize your users.',
video: 'https://www.youtube.com/v/6ow_jW4epf8',
}
-## Overview
-
There are two parts to every Auth system:
- **Authentication:** should this person be allowed in? If yes, who are they?
@@ -64,56 +62,13 @@ You can enable third-party providers with the click of a button by navigating to
### Redirect URLs and wildcards
-When using third-party providers, the [Supabase client library](/docs/reference/javascript/auth-signinwithoauth#sign-in-using-a-third-party-provider-with-redirect) redirects the user to the provider. When the third-party provider successfully authenticates the user, the provider redirects the user to the Supabase Auth callback URL where they are further redirected to the URL specified in the `redirectTo` parameter. This parameter defaults to the [`SITE_URL`](/docs/reference/auth/config#site_url). You can modify the `SITE_URL` or add additional [redirect URLs](https://supabase.com/dashboard/project/_/auth/url-configuration).
-
-You can use wildcard match patterns to support preview URLs from providers like Netlify and Vercel. See the [full list of supported patterns](https://pkg.go.dev/github.com/gobwas/glob#Compile). Use [this tool](https://www.digitalocean.com/community/tools/glob?comments=true&glob=http%3A%2F%2Flocalhost%3A3000%2F%2A%2A&matches=false&tests=http%3A%2F%2Flocalhost%3A3000&tests=http%3A%2F%2Flocalhost%3A3000%2F&tests=http%3A%2F%2Flocalhost%3A3000%2F%3Ftest%3Dtest&tests=http%3A%2F%2Flocalhost%3A3000%2Ftest-test%3Ftest%3Dtest&tests=http%3A%2F%2Flocalhost%3A3000%2Ftest%2Ftest%3Ftest%3Dtest) to test your patterns.
-
-
-
-While the "globstar" (`**`) is useful for local development and preview URLs, we recommend setting the exact redirect URL path for your site URL in production.
-
-
-
-#### Netlify preview URLs
-
-For deployments with Netlify, set the `SITE_URL` to your official site URL. Add the following additional redirect URLs for local development and deployment previews:
-
-- `http://localhost:3000/**`
-- `https://**--my_org.netlify.app/**`
+We've moved the guide for setting up redirect URLs [here](/docs/guides/auth/concepts/redirect-urls).
-#### Vercel preview URLs
-
-For deployments with Vercel, set the `SITE_URL` to your official site URL. Add the following additional redirect URLs for local development and deployment previews:
-
-- `http://localhost:3000/**`
-- `https://*-username.vercel.app/**`
-
-Vercel provides an environment variable for the URL of the deployment called `NEXT_PUBLIC_VERCEL_URL`. See the [Vercel docs](https://vercel.com/docs/concepts/projects/environment-variables#system-environment-variables) for more details. You can use this variable to dynamically redirect depending on the environment. You should also set the value of the environment variable called NEXT_PUBLIC_SITE_URL, this should be set to your site URL in production environment to ensure that redirects function correctly.
-
-```js
-const getURL = () => {
- let url =
- process?.env?.NEXT_PUBLIC_SITE_URL ?? // Set this to your site URL in production env.
- process?.env?.NEXT_PUBLIC_VERCEL_URL ?? // Automatically set by Vercel.
- 'http://localhost:3000/'
- // Make sure to include `https://` when not localhost.
- url = url.includes('http') ? url : `https://${url}`
- // Make sure to include a trailing `/`.
- url = url.charAt(url.length - 1) === '/' ? url : `${url}/`
- return url
-}
-
-const { data, error } = await supabase.auth.signInWithOAuth({
- provider: 'github',
- options: {
- redirectTo: getURL(),
- },
-})
-```
+#### [Netlify preview URLs](/docs/guides/auth/concepts/redirect-urls#netlify-preview-urls)
-#### Mobile deep linking URIs
+#### [Vercel preview URLs](/docs/guides/auth/concepts/redirect-urls#vercel-preview-urls)
-For mobile applications you can use deep linking URIs. For example for your `SITE_URL` you can specify something like `com.supabase://login-callback/` and for additional redirect URLs something like `com.supabase.staging://login-callback/` if needed.
+#### [Mobile deep linking URIs](/docs/guides/auth/concepts/redirect-urls#mobile-deep-linking-uris)
## Authorization
diff --git a/apps/docs/pages/guides/auth/auth-helpers/sveltekit.mdx b/apps/docs/pages/guides/auth/auth-helpers/sveltekit.mdx
index 10fdf26cd6e62..38fec39a043ad 100644
--- a/apps/docs/pages/guides/auth/auth-helpers/sveltekit.mdx
+++ b/apps/docs/pages/guides/auth/auth-helpers/sveltekit.mdx
@@ -1868,8 +1868,6 @@ export const GET: RequestHandler = withAuth(async ({ session, getSupabaseClient
- [Auth Helpers Source code](https://github.com/supabase/auth-helpers)
- [SvelteKit example](https://github.com/supabase/auth-helpers/tree/main/examples/sveltekit)
-- [SvelteKit Email/Password example](https://github.com/supabase/auth-helpers/tree/main/examples/sveltekit-email-password)
-- [SvelteKit Magiclink example](https://github.com/supabase/auth-helpers/tree/main/examples/sveltekit-magic-link)
export const Page = ({ children }) =>
diff --git a/apps/docs/pages/guides/auth/concepts/redirect-urls.mdx b/apps/docs/pages/guides/auth/concepts/redirect-urls.mdx
new file mode 100644
index 0000000000000..371df16ed153b
--- /dev/null
+++ b/apps/docs/pages/guides/auth/concepts/redirect-urls.mdx
@@ -0,0 +1,87 @@
+import Layout from '~/layouts/DefaultGuideLayout'
+
+export const meta = {
+ id: 'redirect-urls',
+ title: 'Redirect URLs',
+ description: 'Set up redirect urls with Supabase Auth.',
+ subtitle: 'Set up redirect urls with Supabase Auth.',
+}
+
+## Overview
+
+When using [passwordless sign-ins](/docs/reference/javascript/auth-signinwithotp) or [third-party providers](/docs/reference/javascript/auth-signinwithoauth#sign-in-using-a-third-party-provider-with-redirect), the Supabase client library methods provide a `redirectTo` parameter to specify where to redirect the user to after authentication. By default, the user will be redirected to the [`SITE_URL`](/docs/reference/auth/config#site_url) but you can modify the `SITE_URL` or add additional redirect URLs to the [allow list](https://supabase.com/dashboard/project/_/auth/url-configuration). Once you've added necessary URLs to the allow list, you can specify the URL you want the user to be redirected to in the `redirectTo` parameter.
+
+## Use wildcards in redirect URLs
+
+Supabase allows you to specify wildcards when adding redirect URLs to the [allow list](https://supabase.com/dashboard/project/_/auth/url-configuration). You can use wildcard match patterns to support preview URLs from providers like Netlify and Vercel.
+
+| Wildcard | Description |
+| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| `*` | matches any sequence of non-separator characters |
+| `**` | matches any sequence of characters |
+| `?` | matches any single non-separator character |
+| `c` | matches character c (c != `*`, `**`, `?`, `\`, `[`, `{`, `}`) |
+| `\c` | matches character c |
+| `[!{ character-range }]` | matches any single character not in the `{ character-range }`. For example, `[!a-z]` will not match any characters ranging from a-z. |
+
+The separator characters in a URL are defined as `.` and `/`. Use [this tool](https://www.digitalocean.com/community/tools/glob?comments=true&glob=http%3A%2F%2Flocalhost%3A3000%2F%2A%2A&matches=false&tests=http%3A%2F%2Flocalhost%3A3000&tests=http%3A%2F%2Flocalhost%3A3000%2F&tests=http%3A%2F%2Flocalhost%3A3000%2F%3Ftest%3Dtest&tests=http%3A%2F%2Flocalhost%3A3000%2Ftest-test%3Ftest%3Dtest&tests=http%3A%2F%2Flocalhost%3A3000%2Ftest%2Ftest%3Ftest%3Dtest) to test your patterns.
+
+
+
+While the "globstar" (`**`) is useful for local development and preview URLs, we recommend setting the exact redirect URL path for your site URL in production.
+
+
+
+### Redirect URL examples with wildcards
+
+| Redirect URL | Description |
+| ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `http://localhost:3000/*` | matches `http://localhost:3000/foo`, `http://localhost:3000/bar` but not `http://localhost:3000/foo/bar` or `http://localhost:3000/foo/` (note the trailing slash) |
+| `http://localhost:3000/**` | matches `http://localhost:3000/foo`, `http://localhost:3000/bar` and `http://localhost:3000/foo/bar` |
+| `http://localhost:3000/?` | matches `http://localhost:3000/a` but not `http://localhost:3000/foo` |
+| `http://localhost:3000/[!a-z]` | matches `http://localhost:3000/1` but not `http://localhost:3000/a` |
+
+## Netlify preview URLs
+
+For deployments with Netlify, set the `SITE_URL` to your official site URL. Add the following additional redirect URLs for local development and deployment previews:
+
+- `http://localhost:3000/**`
+- `https://**--my_org.netlify.app/**`
+
+## Vercel preview URLs
+
+For deployments with Vercel, set the `SITE_URL` to your official site URL. Add the following additional redirect URLs for local development and deployment previews:
+
+- `http://localhost:3000/**`
+- `https://*-username.vercel.app/**`
+
+Vercel provides an environment variable for the URL of the deployment called `NEXT_PUBLIC_VERCEL_URL`. See the [Vercel docs](https://vercel.com/docs/concepts/projects/environment-variables#system-environment-variables) for more details. You can use this variable to dynamically redirect depending on the environment. You should also set the environment variable `NEXT_PUBLIC_SITE_URL` to your site URL in the production environment to ensure that redirects function correctly.
+
+```js
+const getURL = () => {
+ let url =
+ process?.env?.NEXT_PUBLIC_SITE_URL ?? // Set this to your site URL in production env.
+ process?.env?.NEXT_PUBLIC_VERCEL_URL ?? // Automatically set by Vercel.
+ 'http://localhost:3000/'
+ // Make sure to include `https://` when not localhost.
+ url = url.includes('http') ? url : `https://${url}`
+ // Make sure to include a trailing `/`.
+ url = url.charAt(url.length - 1) === '/' ? url : `${url}/`
+ return url
+}
+
+const { data, error } = await supabase.auth.signInWithOAuth({
+ provider: 'github',
+ options: {
+ redirectTo: getURL(),
+ },
+})
+```
+
+## Mobile deep linking URIs
+
+For mobile applications you can use deep linking URIs. For example, for your `SITE_URL` you can specify something like `com.supabase://login-callback/` and for additional redirect URLs something like `com.supabase.staging://login-callback/` if needed.
+
+export const Page = ({ children }) =>
+
+export default Page
diff --git a/apps/docs/pages/guides/auth/phone-login/messagebird.mdx b/apps/docs/pages/guides/auth/phone-login/messagebird.mdx
index 273b48709442e..ad70cc72d526a 100644
--- a/apps/docs/pages/guides/auth/phone-login/messagebird.mdx
+++ b/apps/docs/pages/guides/auth/phone-login/messagebird.mdx
@@ -111,7 +111,7 @@ curl -X POST 'https://cvwawazfelidkloqmbma.supabase.co/auth/v1/signup' \
The user will now receive an SMS with a 6-digit pin that you will need to receive from them within 60 seconds before they can log in to their account.
-You should present a form to the user so they can input the 6 digit pin, then send it along with the phone number to `verifyOTP`:
+You should present a form to the user so they can input the 6-digit pin, then send it along with the phone number to `verifyOtp`:
```js
-let { session, error } = await supabase.auth.verifyOTP({
+let { session, error } = await supabase.auth.verifyOtp({
phone: '+13334445555',
token: '123456',
})
@@ -237,7 +237,7 @@ The second step is the same as the previous section, you need to collect the 6-d
```js
-let { session, error } = await supabase.auth.verifyOTP({
+let { session, error } = await supabase.auth.verifyOtp({
phone: '+13334445555',
token: '123456',
})
diff --git a/apps/docs/pages/guides/auth/phone-login/twilio.mdx b/apps/docs/pages/guides/auth/phone-login/twilio.mdx
index 70dd119c1648a..7489c644267fe 100644
--- a/apps/docs/pages/guides/auth/phone-login/twilio.mdx
+++ b/apps/docs/pages/guides/auth/phone-login/twilio.mdx
@@ -157,7 +157,7 @@ curl -X POST 'https://cvwawazfelidkloqmbma.supabase.co/auth/v1/signup' \
The user will now receive an SMS with a 6-digit pin that you will need to receive from them within 60 seconds before they can log in to their account.
-You should present a form to the user so they can input the 6 digit pin, then send it along with the phone number to `verifyOTP`:
+You should present a form to the user so they can input the 6-digit pin, then send it along with the phone number to `verifyOtp`:
```js
-let { session, error } = await supabase.auth.verifyOTP({
+let { session, error } = await supabase.auth.verifyOtp({
phone: '491512223334444',
token: '123456',
})
@@ -230,7 +230,7 @@ The second step is the same as the previous section, you need to collect the 6-d
```js
-let { session, error } = await supabase.auth.verifyOTP({
+let { session, error } = await supabase.auth.verifyOtp({
phone: '491512223334444',
token: '123456',
})
diff --git a/apps/docs/pages/guides/auth/server-side/email-based-auth-with-pkce-flow-for-ssr.mdx b/apps/docs/pages/guides/auth/server-side/email-based-auth-with-pkce-flow-for-ssr.mdx
index 81bba10258e42..9e41fe99ede2b 100644
--- a/apps/docs/pages/guides/auth/server-side/email-based-auth-with-pkce-flow-for-ssr.mdx
+++ b/apps/docs/pages/guides/auth/server-side/email-based-auth-with-pkce-flow-for-ssr.mdx
@@ -168,7 +168,7 @@ export const GET = async (event) => {
url,
locals: { supabase }
} = event;
- const token_hash = url.searchParams.get('token') as string;
+ const token_hash = url.searchParams.get('token_hash') as string;
const type = url.searchParams.get('type') as string;
const next = url.searchParams.get('next') ?? '/';
diff --git a/apps/docs/pages/guides/database/column-encryption.mdx b/apps/docs/pages/guides/database/column-encryption.mdx
index 1734d88a6af6f..0f6d7e599ee9e 100644
--- a/apps/docs/pages/guides/database/column-encryption.mdx
+++ b/apps/docs/pages/guides/database/column-encryption.mdx
@@ -11,7 +11,7 @@ export const meta = {
Supabase provides a secure method for encrypting data using [Vault](/docs/guides/database/vault), our Postgres secrets manager. Vault is a Postgres extension with an [integrated UI](https://app.supabase.com/project/_/settings/vault/secrets) intended to act as a secure global secrets management for your project.
-In addition to the Vault secret storage table, Supabase also enables an advanced feature called Transparent Column Encryption (TCE) which provides a safe way to encrypt columns in your own tables so that they doesn't leak into logs and backups. It can also provide row-level authenticated encryption.
+In addition to the Vault secret storage table, Supabase also enables an advanced feature called Transparent Column Encryption (TCE) which provides a safe way to encrypt columns in your own tables so that they don't leak into logs and backups. It can also provide row-level authenticated encryption.
Column Encryption comes with tradeoffs that need to be considered before using it.
diff --git a/apps/docs/pages/guides/database/extensions/index_advisor.mdx b/apps/docs/pages/guides/database/extensions/index_advisor.mdx
index d155c36a49bf3..988d2f315b250 100644
--- a/apps/docs/pages/guides/database/extensions/index_advisor.mdx
+++ b/apps/docs/pages/guides/database/extensions/index_advisor.mdx
@@ -31,7 +31,7 @@ Features:
## Installation
-index_advisor is a trusted language extension, which means it is directly installable by users from the [database.dev](database.dev) SQL package repository.
+index_advisor is a trusted language extension, which means it is directly installable by users from the [database.dev](https://database.dev/) SQL package repository.
To get started, enable the dbdev client by executing the [setup SQL script](https://database.dev/installer).
diff --git a/apps/docs/pages/guides/database/postgres/triggers.mdx b/apps/docs/pages/guides/database/postgres/triggers.mdx
index 58b8aca96f4aa..bd3a48106c21c 100644
--- a/apps/docs/pages/guides/database/postgres/triggers.mdx
+++ b/apps/docs/pages/guides/database/postgres/triggers.mdx
@@ -49,7 +49,7 @@ $$;
create trigger salary_update_trigger
after update on employees
for each row
-exectute function update_salary_log();
+execute function update_salary_log();
```
### Trigger variables
diff --git a/apps/docs/pages/guides/platform/migrating-and-upgrading-projects.mdx b/apps/docs/pages/guides/platform/migrating-and-upgrading-projects.mdx
index f425d5d260d3a..38272bbf5c8a1 100644
--- a/apps/docs/pages/guides/platform/migrating-and-upgrading-projects.mdx
+++ b/apps/docs/pages/guides/platform/migrating-and-upgrading-projects.mdx
@@ -132,6 +132,10 @@ const NEW_PROJECT_SERVICE_KEY = 'new-project-service-key-yyy'
})()
```
+### Transfer to a different organization
+
+Note that project migration is for transferring your projects to different regions. If you need to move your project to a different organization without touching the infrastructure, see [project transfers](/docs/guides/platform/project-transfer).
+
export const Page = ({ children }) =>
export default Page
diff --git a/apps/docs/pages/guides/platform/org-based-billing.mdx b/apps/docs/pages/guides/platform/org-based-billing.mdx
index a63986576c4e7..2aff36341a606 100644
--- a/apps/docs/pages/guides/platform/org-based-billing.mdx
+++ b/apps/docs/pages/guides/platform/org-based-billing.mdx
@@ -3,7 +3,7 @@ import Layout from '~/layouts/DefaultGuideLayout'
export const meta = {
id: 'organization-based-billing',
title: 'How billing works',
- description: 'Learn how organzation-based billing works in Supabase.',
+ description: 'Learn how organization-based billing works in Supabase.',
  subtitle: 'Learn how organization-based billing works in Supabase.',
}
@@ -123,7 +123,7 @@ If you launch a second or third instance on your paid plan, we add the additiona
| 12XL | $3.836 | ~$2800 |
| 16XL | $5.12 | ~$3730 |
-With Legacy Billing, when you upgraded the [Compute Add-On](/docs/guides/platform/compute-add-ons), you were immediately charged the prorated amount (days left in your current billing cycle) and when your billing cycle reset you were charged upfront for the entire month. When you downgraded, you got the appropriate credits for unused time.
+With Legacy Billing, when you upgraded the [Compute Add-On](/docs/guides/platform/compute-add-ons), you were immediately charged the prorated amount (days remaining in your current billing cycle) and when your billing cycle reset you were charged upfront for the entire month. When you downgraded, you got the appropriate credits for unused time.
### Free plan
@@ -315,6 +315,15 @@ If you head over to your [organizations' billing settings](https://supabase.com/
infrastructure.
+
+
+ Where do I change my project add-ons such as PITR, Compute and Custom Domain?
+
+
+Head over to your project [Add-ons page](https://supabase.com/dashboard/project/_/settings/addons) to change your compute size, Point-In-Time-Recovery or custom domain.
+
+
+
I have additional questions/concerns, how do I get help?
diff --git a/apps/docs/pages/guides/self-hosting/docker.mdx b/apps/docs/pages/guides/self-hosting/docker.mdx
index 649bb5e2d839f..359343d08b1e4 100644
--- a/apps/docs/pages/guides/self-hosting/docker.mdx
+++ b/apps/docs/pages/guides/self-hosting/docker.mdx
@@ -47,12 +47,12 @@ You can access the Supabase Dashboard through the API gateway on port `8000`. Fo
You will be prompted for a username and password. By default, the credentials are:
-- Username: `supabase`
+- Username: `supabase`
- Password: `this_password_is_insecure_and_should_be_updated`
You should change these credentials as soon as possible using the [instructions](#dashboard-authentication) below.
-### Accessing the APIs
+### Accessing the APIs
Each of the APIs is available through the same API gateway:
@@ -61,9 +61,9 @@ Each of the APIs are available through the the same API gateway:
- Storage: `http://:8000/storage/v1/`
- Realtime: `http://:8000/realtime/v1/`
-### Accessing your Edge Functions
+### Accessing your Edge Functions
-Edge Functions are stored in `volumes/functions`. The default setup has a `hello` Function that you can invoke on `http://:8000/functions/v1/hello`.
+Edge Functions are stored in `volumes/functions`. The default setup has a `hello` Function that you can invoke on `http://:8000/functions/v1/hello`.
You can add new Functions as `volumes/functions//index.ts`. Restart the `functions` service to pick up the changes: `docker compose restart functions --no-deps`
@@ -84,7 +84,7 @@ While we provided you with some example secrets for getting started, you should
### Generate API Keys
-Create a new `JWT_SECRET` and store it securely.
+Create a new `JWT_SECRET` and store it securely.
We can use your JWT Secret to generate new `anon` and `service` API keys using the form below. Update the "JWT Secret" and then run "Generate JWT" once for the `SERVICE_KEY` and once for the `ANON_KEY`:
@@ -114,8 +114,8 @@ You will need to [restart](#restarting-all-services) the services for the change
The dashboard is protected with Basic Authentication. The default user and password MUST be updated before using Supabase in production.
Update the following values in the `.env` file:
- - `DASHBOARD_USERNAME`: The default username for the Dashboard
- - `DASHBOARD_PASSWORD`: The default password for the Dashboard
+ - `DASHBOARD_USERNAME`: The default username for the Dashboard
+ - `DASHBOARD_PASSWORD`: The default password for the Dashboard
You can also add more credentials in `./docker/volumes/api/kong.yml`. For example:
@@ -149,13 +149,13 @@ You can stop Supabase by running `docker compose stop` in same directory as your
You can stop Supabase by running the following in same directory as your `docker-compose.yml` file:
-```sh
+```sh
# Stop docker and remove volumes:
docker compose down -v
# Remove Postgres data:
rm -rf volumes/db/data/
-```
+```
This will destroy all data in the database and storage volumes, so be careful!
@@ -163,7 +163,7 @@ This will destroy all data in the database and storage volumes, so be careful!
Each system can be [configured](../self-hosting#configuration) independently. Some of the most common configuration options are listed below.
-### Configuring an email server
+### Configuring an email server
You will need to use a production-ready SMTP server for sending emails. You can configure the SMTP server by updating the following environment variables:
@@ -178,9 +178,9 @@ SMTP_SENDER_NAME=
We recommend using [AWS SES](https://aws.amazon.com/ses/). It's extremely cheap and reliable. Restart all services to pick up the new configuration.
-### Configuring S3 Storage
+### Configuring S3 Storage
-By default all files are stored locally on the server. You can connfigure the Storage service to use S3 by updating the following environment variables:
+By default all files are stored locally on the server. You can configure the Storage service to use S3 by updating the following environment variables:
```yaml docker-compose.yml
storage:
@@ -216,7 +216,29 @@ By default, Storage backend is set to `file`, which is to use local files as the
Additional configuration is required for self-hosting the Analytics server. For the full setup instructions, see [Self Hosting Analytics](https://supabase.com/docs/reference/self-hosting-analytics/introduction#getting-started).
+### Upgrading Analytics
+
+Due to the changes in the Analytics server, you will need to run the following commands to upgrade your Analytics server:
+
+
+
+- All data in analytics will be deleted when you run the commands below.
+
+
+
+
+```sh
+### Destroy analytics to transition to postgres self hosted solution without other data loss
+
+# Enter the container and use your .env POSTGRES_PASSWORD value to login
+docker exec -it $(docker ps | grep supabase-db | awk '{print $1}') psql -U supabase_admin --password
+# Drop all the data in the _analytics schema
+DROP PUBLICATION logflare_pub; DROP SCHEMA _analytics CASCADE; CREATE SCHEMA _analytics;\q
+# Drop the analytics container
+docker rm supabase-analytics
+```
export const Page = ({ children }) =>
export default Page
+
diff --git a/apps/docs/pages/learn/auth-deep-dive/auth-gotrue.mdx b/apps/docs/pages/learn/auth-deep-dive/auth-gotrue.mdx
index 85d22733009fc..ba702ac28eaf4 100644
--- a/apps/docs/pages/learn/auth-deep-dive/auth-gotrue.mdx
+++ b/apps/docs/pages/learn/auth-deep-dive/auth-gotrue.mdx
@@ -57,7 +57,7 @@ You'll have to make sure your google app is verified of course in order to reque
But all the functionality of gotrue-js is also available in supabase-js, which uses gotrue-js internally when you do things like:
```jsx
-const { user, session, error } = await supabase.auth.signIn({
+const { user, session, error } = await supabase.auth.signInWithPassword({
email: 'example@email.com',
password: 'example-password',
})
diff --git a/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--dark.png b/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--dark.png
new file mode 100644
index 0000000000000..e8ff92bfe376f
Binary files /dev/null and b/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--dark.png differ
diff --git a/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--light.png b/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--light.png
new file mode 100644
index 0000000000000..5a2172cacbde9
Binary files /dev/null and b/apps/docs/public/img/ai/going-prod/dbpedia-hnsw-build-parameters--light.png differ
diff --git a/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--dark.png b/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--dark.png
new file mode 100644
index 0000000000000..0c7a25b75bebf
Binary files /dev/null and b/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--dark.png differ
diff --git a/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--light.png b/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--light.png
new file mode 100644
index 0000000000000..b6cd088270276
Binary files /dev/null and b/apps/docs/public/img/ai/going-prod/dbpedia-ivfflat-vs-hnsw-4xl--light.png differ
diff --git a/apps/docs/public/img/ai/going-prod/size-to-rps--dark.png b/apps/docs/public/img/ai/going-prod/size-to-rps--dark.png
index 69ab4d76f5164..779722dbf9f48 100644
Binary files a/apps/docs/public/img/ai/going-prod/size-to-rps--dark.png and b/apps/docs/public/img/ai/going-prod/size-to-rps--dark.png differ
diff --git a/apps/docs/public/img/ai/going-prod/size-to-rps--light.png b/apps/docs/public/img/ai/going-prod/size-to-rps--light.png
index b4aaf58e257f6..992b5e730842b 100644
Binary files a/apps/docs/public/img/ai/going-prod/size-to-rps--light.png and b/apps/docs/public/img/ai/going-prod/size-to-rps--light.png differ
diff --git a/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/nsw.png b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/nsw.png
new file mode 100644
index 0000000000000..63387dd738e40
Binary files /dev/null and b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/nsw.png differ
diff --git a/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--dark.png b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--dark.png
new file mode 100644
index 0000000000000..856fa59c0ced4
Binary files /dev/null and b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--dark.png differ
diff --git a/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--light.png b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--light.png
new file mode 100644
index 0000000000000..a1f394137f3da
Binary files /dev/null and b/apps/docs/public/img/ai/vector-indexes/hnsw-indexes/skip-list--light.png differ
diff --git a/apps/docs/public/sitemap.xml b/apps/docs/public/sitemap.xml
index c8eaf5ff95eab..e25ed2cfa05f1 100644
--- a/apps/docs/public/sitemap.xml
+++ b/apps/docs/public/sitemap.xml
@@ -151,7 +151,7 @@
- https://supabase.com/docs/guides/ai/managing-indexes
+ https://supabase.com/docs/guides/ai/vector-indexes
diff --git a/apps/www/_blog/2023-02-03-openai-embeddings-postgres-vector.mdx b/apps/www/_blog/2023-02-03-openai-embeddings-postgres-vector.mdx
index 7e3d5c209c721..7184cd66f41f1 100644
--- a/apps/www/_blog/2023-02-03-openai-embeddings-postgres-vector.mdx
+++ b/apps/www/_blog/2023-02-03-openai-embeddings-postgres-vector.mdx
@@ -378,11 +378,13 @@ The OpenAI API supports [completion streaming](https://platform.openai.com/docs/
Storing embeddings in Postgres opens a world of possibilities. You can combine your search function with telemetry functions, add user-provided feedback (thumbs up/down), and make your search feel more integrated with your products.
-The [pgvector extension](https://supabase.com/docs/guides/ai/vector-columns) is available on all new Supabase projects today. If you want to try it out, launch a new Postgres database today: [database.new](https://database.new)
+The [pgvector extension](https://supabase.com/docs/guides/ai/vector-columns) is available on all new Supabase projects today. To try it out, launch a new Postgres database: [database.new](https://database.new)
-## More pgvector and ChatGPT resources
+## More pgvector and AI resources
- [Supabase Clippy: ChatGPT for Supabase Docs](https://supabase.com/blog/chatgpt-supabase-docs)
-- [A ChatGPT Plugins Template built with Supabase Edge Runtime](https://supabase.com/blog/building-chatgpt-plugins-template)
-- [Template for building your own custom ChatGPT style doc search](https://github.com/supabase-community/nextjs-openai-doc-search)
-- [Supabase + LangChain Starter Template](https://blog.langchain.dev/langchain-x-supabase/)
+- [Hugging Face is now supported in Supabase](https://supabase.com/blog/hugging-face-supabase)
+- [How to build ChatGPT Plugin from scratch with Supabase Edge Runtime](https://supabase.com/blog/building-chatgpt-plugins-template)
+- [Docs pgvector: Embeddings and vector similarity](https://supabase.com/docs/guides/database/extensions/pgvector)
+- [Choosing Compute Add-on for AI workloads](https://supabase.com/docs/guides/ai/choosing-compute-addon)
+- [pgvector v0.5.0: Faster semantic search with HNSW indexes](https://supabase.com/blog/increase-performance-pgvector-hnsw)
diff --git a/apps/www/_blog/2023-02-07-chatgpt-supabase-docs.mdx b/apps/www/_blog/2023-02-07-chatgpt-supabase-docs.mdx
index 2a7a46d93fb36..fe8f61cf7df76 100644
--- a/apps/www/_blog/2023-02-07-chatgpt-supabase-docs.mdx
+++ b/apps/www/_blog/2023-02-07-chatgpt-supabase-docs.mdx
@@ -123,7 +123,7 @@ Want to try Supabase Clippy? It's a hidden feature while in MVP - visit [supabas
## More pgvector and ChatGPT resources
-- [pgvector docs]([https://supabase.com/blog/chatgpt-supabase-docs](https://supabase.com/docs/guides/getting-started/openai/vector-search)
+- [AI docs](https://supabase.com/docs/guides/ai)
- [A ChatGPT Plugins Template built with Supabase Edge Runtime](https://supabase.com/blog/building-chatgpt-plugins-template)
- [Template for building your own custom ChatGPT style doc search](https://github.com/supabase-community/nextjs-openai-doc-search)
- [Supabase + LangChain Starter Template](https://blog.langchain.dev/langchain-x-supabase/)
diff --git a/apps/www/_blog/2023-05-15-building-chatgpt-plugins-template.mdx b/apps/www/_blog/2023-05-15-building-chatgpt-plugins-template.mdx
index e9281978c452e..b9f1fde47f130 100644
--- a/apps/www/_blog/2023-05-15-building-chatgpt-plugins-template.mdx
+++ b/apps/www/_blog/2023-05-15-building-chatgpt-plugins-template.mdx
@@ -274,8 +274,7 @@ In a next step you can add authentication to your plugin, let us know on [Twitte
## More AI resources
+- [Hugging Face is now supported in Supabase](https://supabase.com/blog/hugging-face-supabase)
+- [pgvector v0.5.0: Faster semantic search with HNSW indexes](https://supabase.com/blog/increase-performance-pgvector-hnsw)
- [OpenAI ChatGPT Plugin docs](https://platform.openai.com/docs/plugins/introduction)
-- [Advanced plugin template](https://github.com/openai/chatgpt-retrieval-plugin)
-- Learn to build your own ChatGPT-style docs search with [Deno Fresh](https://deno.com/blog/build-chatgpt-doc-search-with-supabase-fresh) or [Next.js](https://vercel.com/templates/next.js/nextjs-openai-doc-search-starter)
-- [How we builtChatGPT for the Supabase Docs](https://supabase.com/blog/chatgpt-supabase-docs).
- [Docs pgvector: Embeddings and vector similarity](https://supabase.com/docs/guides/database/extensions/pgvector)
diff --git a/apps/www/_blog/2023-07-13-pgvector-performance.mdx b/apps/www/_blog/2023-07-13-pgvector-performance.mdx
index ed40d524fa984..4a2df4293a059 100644
--- a/apps/www/_blog/2023-07-13-pgvector-performance.mdx
+++ b/apps/www/_blog/2023-07-13-pgvector-performance.mdx
@@ -1,6 +1,6 @@
---
title: 'pgvector 0.4.0 performance'
-description: There's been lot of talk about pgvector performance lately, so we took some datasets and pushed pgvector to the limits to find out its strengths and limitations.
+description: There's been a lot of talk about pgvector performance lately, so we took some datasets and pushed pgvector to the limits to find out its strengths and limitations.
categories:
- engineering
tags:
@@ -15,6 +15,13 @@ image: 2023-07-13-pgvector-performance/vector-benchmarks-og.jpeg
thumb: 2023-07-13-pgvector-performance/vector-benchmarks-thumb.jpeg
---
+
+
+🚀 The incorporation of the HNSW index in pgvector v0.5.0 ensures lightning-fast vector searches. We tested it, benchmarked it, and shared everything.
+[Read the new post](https://supabase.com/blog/increase-performance-pgvector-hnsw)
+
+
+
There are a few pgvector benchmarks floating around the internet, most recently a [pgvector vs Qdrant](https://nirantk.com/writing/pgvector-vs-qdrant/) comparison by NirantK. We wanted to reproduce (or improve!) the results.
There is an obvious bias here: we're a Postgres company. It's not our goal to prove that pgvector is better than Qdrant for running vector workloads. From everything we hear about Qdrant, it's fantastic.
@@ -23,7 +30,7 @@ Our goals in this article are:
1. To show the strengths and limitations of the _current version_ of pgvector.
2. Highlight some improvements that are coming to pgvector.
-3. Prove to you that it's completely viable for production workloads and give you some tips on using it at scale. We'll show you how to run 1 million Open AI embeddings at ~1800 requests per second with 91% precision, or 670 requests per second with 98% precision.
+3. Prove to you that it's completely viable for production workloads and give you some tips on using it at scale. We'll show you how to run 1 million Open AI embeddings at ~1800 queries per second with 91% accuracy, or 670 queries per second with 98% accuracy.
## Benchmark Methodology
@@ -32,7 +39,7 @@ We've used the [ANN Benchmarks](https://github.com/erikbern/ann-benchmarks) meth
The key elements are:
- **Helper scripts:** a Python test runner which is responsible for data upload, index creation, and query execution. This uses [qdrant's vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) repo. The “engine” in this repo uses [Vecs](https://github.com/supabase/vecs), a Python client for pgvector.
-- **Runtime:** Each test runs for at least 30-40 minutes and included a series of experiments executed at various concurrency levels. This process allowed us to gauge the engine's performance under different load types. Subsequently, we averaged the results.
+- **Runtime:** Each test runs for at least 30-40 minutes and includes a series of experiments executed at various concurrency levels. This process allowed us to gauge the engine's performance under different load types. Subsequently, we averaged the results.
- **Pre-warming RAM:** We executed 10,000 to 50,000 “warm-up” queries before each benchmark, matching the number of probes as the benchmark. Additionally, we executed about 1,000 queries with probes ranging from three to ten times the benchmark's probes. Both of these help with RAM utilization.
@@ -87,7 +94,7 @@ Let's start with NirantK's results as a baseline:
/>
-They aren't very flattering! Repeating our statements above, these benchmarks are using the defaults for both engines. Our goal now is to replicate the results, and then see what improvements need to be made as developers scale up their workload.
+They aren't very flattering! Repeating our statements above, these benchmarks use the defaults for both engines. Our goal now is to replicate the results, and then see what improvements need to be made as developers scale up their workload.
## Results
@@ -111,8 +118,8 @@ The resulting figures were significantly different after these changes.
With the changes above and probes set to 10, pgvector was faster and more accurate:
-- precision@10 of 0.91
-- RPS (requests per second) of 380
+- accuracy@10 of 0.91
+- QPS (queries per second) of 380
@@ -170,13 +177,13 @@ The Qdrant benchmark uses “default” configuration and is in not indicative o
/>
-Although more compute is required to match Qdrant's precision and RPS levels concurrently, this is still a satisfying outcome. It means that it's not a _necessity_ to use another vector database. You can put everything in Postgres to lower your operational complexity.
+Although more compute is required to match Qdrant's accuracy and QPS levels concurrently, this is still a satisfying outcome. It means that it's not a _necessity_ to use another vector database. You can put everything in Postgres to lower your operational complexity.
### Final results: pgvector performance
Putting it all together, we find that we can predictably scale our database to match the performance we need.
-With a 64-core, 256 GB server we achieve ~1800 RPS and 0.91 precision. This is for pgvector 0.4.0, and we've heard that the latest version (0.4.4) already has significant improvements. We'll release those benchmarks as soon as we have them.
+With a 64-core, 256 GB server we achieve ~1800 QPS and 0.91 accuracy. This is for pgvector 0.4.0, and we've heard that the latest version (0.4.4) already has significant improvements. We'll release those benchmarks as soon as we have them.
-Dot product is the product of each vector element pair summed together into a single result. Fewer dimensions in the vector means fewer calculations for every computed distance .
+Dot product is the product of each vector element pair summed together into a single result. Fewer dimensions in the vector means fewer calculations for every computed distance.
-We compared the performance of `text-embedding-ada-002` from OpenAI (1536 dimensions) with open-source `all-MiniLM-L6-v2` (384 dimensions) by measuring requests per second at a constant precision and configuration:
+We compared the performance of `text-embedding-ada-002` from OpenAI (1536 dimensions) with open-source `all-MiniLM-L6-v2` (384 dimensions) by measuring queries per second at a constant accuracy and configuration:
- **Database size:** with 2vCPU (ARM) and 8GB RAM - `large` add-on for Supabase project.
- **Version:** Postgres v15 and pgvector v0.4.0
@@ -95,7 +95,7 @@ We compared the performance of `text-embedding-ada-002` from OpenAI (1536 dimens
- **Index:** The index was generated for `inner-product` (dot product) distance function with `lists=1000`.
- **Process:** We followed [our optimization guide and tips](https://supabase.com/docs/guides/ai/going-to-prod#performance-tips-when-using-indexes).
-We observed pgvector with `all-MiniLM-L6-v2` outperforming `text-embedding-ada-002` by 78% when holding the precision@10 constant at 0.99. This gap increases as you lower the precision. Postgres was using just 4GB of RAM with 384d vectors generated by `all-MiniLM-L6-v2` compared to 7.5GB with `text-embedding-ada-002`.
+We observed pgvector with `all-MiniLM-L6-v2` outperforming `text-embedding-ada-002` by 78% when holding the accuracy@10 constant at 0.99. This gap increases as you lower the accuracy. Postgres was using just 4GB of RAM with 384d vectors generated by `all-MiniLM-L6-v2` compared to 7.5GB with `text-embedding-ada-002`.
-After that, we decided to try the recently published [gte-small](https://huggingface.co/thenlper/gte-small) (also 384 dimensions), and the results were even more astonishing. With `gte-small`, we could set `probes=10` to achieve the same level of `precision@10 = 0.99`. Consequently, we observed more than a 200% improvement in requests per second for pgvector with embeddings generated by `gte-small` compared to `all-MiniLM-L6-v2`.
+After that, we decided to try the recently published [gte-small](https://huggingface.co/thenlper/gte-small) (also 384 dimensions), and the results were even more astonishing. With `gte-small`, we could set `probes=10` to achieve the same level of `accuracy@10 = 0.99`. Consequently, we observed more than a 200% improvement in queries per second for pgvector with embeddings generated by `gte-small` compared to `all-MiniLM-L6-v2`.
+
+💡 If you have an existing database that was created before this release, please reach out to [support](https://supabase.com/dashboard/support/new) and we'll assist you in upgrading to pgvector v0.5.0.
+
+
+
+## How does HNSW work?
+
+Compared to inverted file (IVF) indexes which use [clusters](https://supabase.com/docs/guides/ai/vector-indexes/ivf-indexes#how-does-ivfflat-work) to approximate nearest-neighbor search, HNSW uses proximity graphs (graphs connecting nodes based on distance between them). To understand HNSW, we can break it down into 2 parts:
+
+- **Hierarchical (H):** The algorithm operates over multiple layers
+- **Navigable Small World (NSW):** Each vector is a node within a graph and is connected to several other nodes
+
+### Hierarchical
+
+The hierarchical aspect of HNSW builds off of the idea of skip lists.
+
+Skip lists are multi-layer linked lists. The bottom layer is a regular linked list connecting an ordered sequence of elements. Each new layer above removes some elements from the underlying layer (based on a fixed probability), producing a sparser subsequence that “skips” over elements.
+
+
+
+
+
+
+When searching for an element, the algorithm begins at the top layer and traverses its linked list horizontally. If the target element is found, the algorithm stops and returns it. Otherwise if the next element in the list is greater than the target (or NIL), the algorithm drops down to the next layer below. Since each layer below is less sparse than the layer above (with the bottom layer connecting all elements), the target will eventually be found. Skip lists offer O(log n) average complexity for both search and insertion/deletion.
+
+### Navigable Small World
+
+A navigable small world (NSW) is a special type of proximity graph that also includes long-range connections between nodes. These long-range connections support the “small world” property of the graph, meaning almost every node can be reached from any other node within a few hops. Without these additional long-range connections, many hops would be required to reach a far-away node.
+
+
+
+The “navigable” part of NSW specifically refers to the ability to logarithmically scale the greedy search algorithm on the graph, an algorithm that attempts to make only the locally optimal choice at each hop. Without this property, the graph may still be considered a small world with short paths between far-away nodes, but the greedy algorithm tends to miss them. Greedy search is ideal for NSW because it is quick to navigate and has low computational costs.
+
+### Hierarchical + Navigable Small World
+
+HNSW combines these two concepts. From the hierarchical perspective, the bottom layer consists of an NSW made up of short links between nodes. Each layer above “skips” elements and creates longer links between nodes further away from each other.
+
+Just like skip lists, search starts at the top layer and works its way down until it finds the target element. However, instead of comparing a scalar value at each layer to determine whether or not to descend to the layer below, a multi-dimensional distance measure (such as Euclidean distance) is used.
+
+## HNSW performance: 1536 dimensions
+
+To understand the performance improvements that HNSW offers, we decided to expand upon our previous benchmarks and include results for the HNSW index in addition to IVF and compare the queries per second (QPS) for both.
+
+### wikipedia-en-embeddings
+
+In the first test, we used [224,482 vectors by OpenAI](https://huggingface.co/datasets/Supabase/wikipedia-en-embeddings) (1536 dimensions). You can find our previous benchmark with additional information on how vector dimensions may affect performance in [pgvector: Fewer dimensions are better](https://supabase.com/blog/pgvector-performance).
+
+In this test, we used a Supabase project with a large compute add-on (2-core CPU and 8GB RAM) and built the HNSW index with the following parameters:
+
+- `m`: 32
+- `ef_construction`: 64
+
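+As a rough sketch, the index build for this test looks like the SQL below. The `documents` table, `embedding` column, and inner-product operator class are illustrative assumptions, not the exact benchmark schema:
+
+```sql
+-- enable pgvector (v0.5.0 or later for HNSW)
+create extension if not exists vector;
+
+-- hypothetical table holding 1536-dimensional OpenAI embeddings
+create table documents (
+  id bigserial primary key,
+  embedding vector(1536)
+);
+
+-- HNSW index with the build parameters used in this test
+create index on documents using hnsw (embedding vector_ip_ops)
+  with (m = 32, ef_construction = 64);
+```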
+
+
+
+
+
+HNSW delivered 3 times better performance than IVFFlat, with better accuracy.
+
+### dbpedia-entities-openai-1M
+
+Next, we took the setup from our benchmarks with [1,000,000 vectors by OpenAI](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) (1536 dimensions). If you want to find out more about pgvector 0.4.0 IVFFlat performance and our load testing methodology, check out the [pgvector 0.4.0 performance](https://supabase.com/blog/pgvector-performance) blog post.
+
+Here we used a Supabase project with a 2XL compute add-on (8-core CPU and 32GB RAM) and built the HNSW index with the following parameters:
+
+- `m`: 24
+- `ef_construction`: 56
+
+
+
+
+
+
+When we maintain fixed HNSW build parameters, we can adjust the query-time parameter `ef_search` to balance query speed and accuracy. To achieve accuracy@10 of 0.98, we increased it from the default 40 to 100. For accuracy@10 of 0.99, we further raised it to 250. Remarkably, HNSW demonstrated over six times better performance while maintaining the same level of accuracy. With a higher accuracy@10 of 0.99, HNSW even outperforms [qdrant](https://nirantk.com/writing/pgvector-vs-qdrant/) on equivalent compute resources.
+
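+For reference, `ef_search` is set per session (or per transaction) before running the similarity query. A minimal sketch, reusing the hypothetical `documents` table from earlier and assuming inner-product distance, with `$1` standing in for the query embedding:
+
+```sql
+-- default is 40; we used 100 for accuracy@10 of 0.98 and 250 for 0.99
+set hnsw.ef_search = 100;
+
+select id
+from documents
+order by embedding <#> $1 -- negative inner product distance to the query vector
+limit 10;
+```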
+
+
+
+
+
+### Scaling the database
+
+Performance scales predictably with the size of the database for the HNSW index, just as [it does for the IVFFlat index](https://supabase.com/blog/pgvector-performance#scaling-the-database). The difference in performance between IVFFlat and HNSW remains consistent after a compute upgrade.
+
+
+
+
+
+
+Switching to a 4XL compute add-on (with a 16-core CPU and 64GB of RAM) resulted in a 69% increase in QPS for an accuracy@10 of 0.99 compared to the 2XL instance.
+
+
+
+
+
+
+Furthermore, 64GB of RAM is better suited to this dataset because Postgres uses approximately 30-35GB of RAM to achieve optimal performance. With a 4XL setup, you not only have the capacity to store 1,000,000 vectors but also the flexibility to accommodate additional data or increase the number of vectors as needed.
+
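+A quick way to gauge how much RAM a dataset needs is to check the on-disk size of the table and its indexes (the table and index names below are hypothetical):
+
+```sql
+-- table plus all of its indexes
+select pg_size_pretty(pg_total_relation_size('documents'));
+
+-- a single index (use the name of your HNSW index)
+select pg_size_pretty(pg_relation_size('documents_embedding_idx'));
+```
+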
+### HNSW build parameters
+
+We conducted a small experiment to assess the impact of HNSW index parameters on accuracy and QPS. In this experiment, we utilized the same 4XL instance as in the previous test but modified the build parameters:
+
+- `m`: 32
+- `ef_construction`: 80
+
+This adjustment enabled us to achieve the same accuracy@10 of 0.99 with `ef_search` = 100 instead of the previous 250, resulting in a 35% increase in QPS.
+
+
+
+
+
+
+## Other HNSW features
+
+In addition to the above query performance improvements, HNSW offers another advantage: you don't need to fully fill your table before building the index.
+
+With IVFFlat indexes, the clusters (lists) are constructed based on the distribution of existing data in the table. This means that IVF indexes built on an empty table would produce completely suboptimal centers. This is why pgvector recommends building IVF indexes only once sufficient data exists in the table and rebuilding them any time the distribution of data changes significantly.
+
+HNSW indexes use graphs, which are inherently unaffected by this limitation, so you can safely create your HNSW index immediately after the table is created. As new data is added to the table, the index will be filled automatically and the index structure will remain optimal.
+
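+In practice that means the index can be declared as soon as the table exists, for example (names are illustrative):
+
+```sql
+-- safe to run on an empty table: the graph is built incrementally as rows
+-- are inserted, unlike IVFFlat which needs representative data first
+create index on documents using hnsw (embedding vector_ip_ops);
+```
+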
+## Improvements to IVF indexes
+
+IVFFlat indexes also saw some improvements in v0.5.0. Index build times are now significantly faster for large datasets thanks to 2 new improvements:
+
+- Parallelization during the assignment build step (assigning records to lists)
+- Switching from [double to float](https://github.com/pgvector/pgvector/pull/180) precision for select distance calculations, which unlocks the [fused multiply-add](https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation#Fused_multiply%E2%80%93add) instruction on CPUs
+
+Below we compare the index build times between v0.4.0 and v0.5.0 for the inner product distance measure over 1,000,000 vectors on a Supabase project with 4XL compute add-on (with a 16-core CPU and 64GB of RAM).
+
+- `lists = 1000`
+
+
+
+
+
+
+- `lists = 5000`
+
+
+
+
+
+
+The index build time has decreased by over 50%, and this ratio remains consistent when you adjust the index build parameters ([specifically, the `lists` value](https://supabase.com/blog/pgvector-performance#other-performance-factors)).
+
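+As a sketch, the builds being timed here are plain inner-product IVFFlat indexes over the same kind of table (names are illustrative); only the `lists` value changes between the two runs:
+
+```sql
+create index on documents using ivfflat (embedding vector_ip_ops)
+  with (lists = 1000); -- the second run used lists = 5000
+```
+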
+## When should you use HNSW vs IVF?
+
+In most cases today, HNSW offers a more performant and robust index than IVFFlat. It's worth noting though that HNSW indexes will almost always be slower to build and use more memory than IVFFlat, so if your system is memory-constrained and you don't foresee the need to rebuild your index often, you may find IVFFlat to be more suitable. Product quantization (compressing index entries for vectors) is also expected for IVF in upcoming versions of pgvector, which should significantly improve performance and lower resource requirements. Regardless of the type of index you use, we still recommend [reducing the number of dimensions](https://supabase.com/blog/fewer-dimensions-are-better-pgvector) in your embeddings when possible.
+
+## Start using HNSW
+
+All [new Supabase databases](https://database.new/) will automatically ship with pgvector v0.5.0 which includes the new HNSW indexes. If you have an existing database, please reach out to [support](https://supabase.com/dashboard/support/new) and we'll be more than happy to assist you with an upgrade.
+
+## More pgvector and AI resources
+
+- [How to build ChatGPT Plugin from scratch with Supabase Edge Runtime](https://supabase.com/blog/building-chatgpt-plugins-template)
+- [Docs pgvector: Embeddings and vector similarity](https://supabase.com/docs/guides/database/extensions/pgvector)
+- [Choosing Compute Add-on for AI workloads](https://supabase.com/docs/guides/ai/choosing-compute-addon)
+- [pgvector: Fewer dimensions are better](https://supabase.com/blog/fewer-dimensions-are-better-pgvector)
diff --git a/apps/www/_blog/2023-09-08-beta-update-august-2023.mdx b/apps/www/_blog/2023-09-08-beta-update-august-2023.mdx
new file mode 100644
index 0000000000000..a01abe581203a
--- /dev/null
+++ b/apps/www/_blog/2023-09-08-beta-update-august-2023.mdx
@@ -0,0 +1,135 @@
+---
+title: Supabase Beta August 2023
+description: Launch Week 8 review and more things we shipped 🚀
+author: ant_wilson
+image: 2023-09-07-beta-update-august-2023/monthly-update-august-2023.jpg
+thumb: 2023-09-07-beta-update-august-2023/monthly-update-august-2023.jpg
+categories:
+ - product
+tags:
+ - release-notes
+date: '2023-09-08'
+toc_depth: 3
+---
+
+Launch Week 8 breezed by, leaving behind a trail of new features to explore. Here is a recap of everything and an update on what else we are working on, like pgvector 0.5.0 with HNSW.
+
+## pgvector v0.5.0: Faster semantic search with HNSW indexes
+
+![pgvector v0.5.0: Faster semantic search with HNSW indexes](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/pgvector-0-5-0-hnsw.png)
+
+Supabase Vector is about to get a lot faster 💨. pgvector v0.5.0 adds Hierarchical Navigable Small World (HNSW), a new type of index that ensures lightning-fast vector searches, especially in high-dimensional spaces and embeddings.
+
+[Blog post](https://supabase.com/blog/increase-performance-pgvector-hnsw)
+
+## Day 1 - Hugging Face is now supported in Supabase
+
+![Day 1 - Hugging Face is now supported in Supabase](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-1.png?t=2023-09-08T07%3A12%3A07.012Z)
+
+We are all about open source collaboration, and Hugging Face is one of the open source communities we admire most. That’s why we've added Hugging Face support in our Python Vector Client and Edge Functions (Javascript) 🤗
+
+- [Blog post](https://supabase.com/blog/hugging-face-supabase)
+- [Video announcement](https://www.youtube.com/watch?v=RJccSbJ9Go4)
+
+## Day 2 - Supabase Local Dev: migrations, branching, and observability
+
+![Day 2 - Supabase Local Dev: migrations, branching, and observability](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-2.png?t=2023-09-08T07%3A12%3A43.796Z)
+
+The CLI received some serious upgrades including observability tools, streamlined backups, and enhanced migrations. But that's not all – the big game-changer is the introduction of Supabase branching which we’re rolling out to selected customers.
+
+- [Blog post](https://supabase.com/blog/supabase-local-dev)
+- [Video announcement](https://www.youtube.com/watch?v=N0Wb85m3YMI)
+
+## Day 3 - Supabase Studio 3.0
+
+![Day 3 - Supabase Studio 3.0](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-3.png?t=2023-09-08T07%3A13%3A17.511Z)
+
+Supabase Studio went to [#1 on Product Hunt](https://www.producthunt.com/products/supabase#ai-sql-editor-by-supabase) with some huge new features, including AI SQL editor, Schema diagrams, Wrappers UI, and a lot more!
+
+- [Blog post](https://supabase.com/blog/supabase-studio-3-0)
+- [Video announcement](https://www.youtube.com/watch?v=51tCMQPiitQ)
+
+## Day 4 - Supabase Integrations Marketplace
+
+![Day 4 - Supabase Integrations Marketplace](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-4-vercel.jpg?t=2023-09-08T07%3A13%3A43.691Z)
+
+With the release of OAuth2 applications, we've made it easier than ever for our partners to extend the Supabase platform with useful tooling 🙌
+
+- [Blog post](https://supabase.com/blog/supabase-integrations-marketplace)
+- [Video announcement](https://www.youtube.com/watch?v=gtJo1lTxHfs)
+
+## Day 4 - Vercel Integration 2.0 and Next.js App Router Support
+
+![Day 4 - Vercel Integration 2.0 and Next.js App Router Support](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-4.png?t=2023-09-08T07%3A14%3A19.166Z)
+
+The New Supabase x Vercel integration streamlines the process of creating, deploying, and maintaining web applications with several enhancements. Plus, it fully supports the App Router in Next.js ▲
+
+[Blog post](https://supabase.com/blog/using-supabase-with-vercel)
+
+## Day 5 - Supavisor: Scaling Postgres to 1 Million Connections
+
+![Day 5 - Supavisor: Scaling Postgres to 1 Million Connections](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-5-supavisor.jpg?t=2023-09-08T07%3A14%3A40.415Z)
+
+Supavisor is a scalable, cloud-native Postgres connection pooler written in Elixir. It has been developed with multi-tenancy in mind, handling millions of connections without significant overhead or latency. We’re rolling it out to every database on our platform.
+
+- [Blog post](https://supabase.com/blog/supavisor-1-million)
+- [Video announcement](https://www.youtube.com/watch?v=qzxzLSAJDfE)
+
+## Community Highlights from the past 4 months
+
+![Community Highlights from the past 4 months](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-5.jpg)
+
+Launch Week is an event for our community, so it’s a good time to look back at what happened in the last months (spoiler: a lot).
+
+[Blog post](https://supabase.com/blog/launch-week-8-community-highlights)
+
+## One more thing: HIPAA and SOC2 Type 2
+
+![One more thing: HIPAA and SOC2 Type 2](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/LW-Digest-5-compliance.jpg?t=2023-09-08T07%3A15%3A15.241Z)
+
+Supabase is officially SOC2 Type 2 and HIPAA compliant! In this write-up, we offer insights into what you can expect if you’re planning to go through the same process.
+
+[Blog post](https://supabase.com/blog/supabase-soc2-hipaa)
+
+## Launch Week 8 Hackathon Winners
+
+![Launch Week 8 Hackathon Winners](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/launch-week-8-hackathon)
+
+The decision was not easy, but after assessing a record number of submissions, the panel of judges chose [WITAS](https://github.com/alex-streza/witas) as the winner of the Best Overall project. As the name doesn't suggest, it's an acronym for "Wait, is that a sticker?" In a nutshell, it generates stickers with Midjourney. Huge congrats to [Alex Streza](https://twitter.com/alex_streza) and [Catalina Melnic](https://twitter.com/Catalina_Melnic).
+
+- [Full list of winners](https://t.co/onYiaDmavb)
+- [All the submissions](https://www.madewithsupabase.com/hackathons/launch-week-8)
+
+## More product announcements!
+
+Shipping doesn’t stop here at Supabase! We are back in full shipping mode and already thinking about the next LW. These are some of the things we’ve been working on:
+
+- Updated and rewrote a bunch of docs: [Row Level Security](https://supabase.com/docs/guides/database/postgres/row-level-security), [Postgres Roles](https://supabase.com/docs/guides/database/postgres/roles), [Database configuration](https://supabase.com/docs/guides/database/postgres/configuration).
+- Implemented read-only UI for indexes. [PR](https://github.com/supabase/supabase/pull/16582)
+- Organization-based Billing, project transfers, team plan. [Blog post](https://supabase.com/blog/organization-based-billing)
+
+## Extended Community Highlights
+
+![Community Highlights](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/community-highlights.png)
+
+- Supabase’s Happy Hour made a comeback! Two new episodes of Alaister, Tyler, and Jon chatting about the latest news while live coding. [Episode #27](https://www.youtube.com/watch?v=OWhKVbg1p7Y) | [Episode #28](https://www.youtube.com/watch?v=_Z2f-gGrYu8)
+- The State of Postgres 2023 is live. Take the survey and help improve Postgres. [Survey](https://timescale.typeform.com/state-of-pg-23/)
+- Jon Meyers stopped by the PodRocket podcast to chat about everything we shipped for LW8. [Podcast](https://podrocket.logrocket.com/supabase-launch-week)
+- Supabase Crash Course for iOS Developers: Mikaela shows how to implement a Postgres database in a Swift project. [Video](https://www.youtube.com/watch?v=XBSiXROUoZk)
+- The Vite ecosystem conference is back and we are happy to be a Community Partner again. [Get your ticket](https://viteconf.org/23/ecosystem/supabase)
+- Building a real app with Tamagui and Supabase. [Video](https://www.youtube.com/watch?v=d32F7crxXsY)
+- Build PostgreSQL Databases Faster With Supabase AI SQL Editor. [Video](https://www.youtube.com/watch?v=ueCECQ24STI)
+- Creating Customized i18n-Ready Authentication Emails using Supabase Edge Functions, PostgreSQL, and Resend. [Blog post](https://blog.mansueli.com/creating-customized-i18n-ready-authentication-emails-using-supabase-edge-functions-postgresql-and-resend)
+- Expo Router Tabs with Supabase Authentication. [Video](https://www.youtube.com/watch?v=6IzrH-1M0uE&list=PL2PY2-9rsgl2DikpQG-GgO7TBgRtdB6NT&index=6)
+- Integrating Supabase with Prisma and TRPC: A Comprehensive Guide. [Tutorial](https://tobicode.hashnode.dev/integrating-supabase-with-prisma-and-trpc-a-comprehensive-guide)
+- A Supa-Introduction to Supabase. [Blog post](https://medium.com/@alex.streza/a-supa-introduction-to-supabase-e551ea6708e)
+- Authentication in Next.js with Supabase Auth and PKCE. [Tutorial](https://dev.to/mryechkin/authentication-in-nextjs-with-supabase-auth-and-pkce-45pk)
+- Implementing OAuth in Nuxt with Supabase. [Tutorial](https://dev.to/jacobandrewsky/implementing-oauth-in-nuxt-with-supabase-4p1k)
+
+## ⚠️ Baking hot meme zone ⚠️
+
+If you made it this far in the email, you deserve a devilish treat.
+
+![Beta Update Meme August 2023](https://obuldanrptloktxcffvn.supabase.co/storage/v1/object/public/images/marketing-emails/august%202023/meme-beta-update-august.jpeg?t=2023-09-08T07%3A16%3A43.981Z)
+
+That's it for now, see you next month 👋
diff --git a/apps/www/_customers/chatbase.mdx b/apps/www/_customers/chatbase.mdx
new file mode 100644
index 0000000000000..590a8be9ea1dc
--- /dev/null
+++ b/apps/www/_customers/chatbase.mdx
@@ -0,0 +1,78 @@
+---
+name: Chatbase
+title: Bootstrapped founder builds an AI app with Supabase and scales to $1M in 5 months
+# Use meta_title to add a custom meta title. Otherwise it defaults to '{name} | Supabase Customer Stories':
+meta_title: Bootstrapped founder builds an AI app with Supabase and scales to $1M in 5 months
+description: How Yasser leveraged Supabase to build Chatbase and became one of the most successful single-founder AI products.
+# Use meta_description to add a custom meta description. Otherwise it defaults to {description}:
+meta_description: How Yasser leveraged Supabase to build Chatbase and became one of the most successful single-founder AI products.
+author: copple
+author_title: Supabase
+author_url: https://github.com/kiwicopple
+author_image_url: https://avatars2.githubusercontent.com/u/10214025?s=400&u=c6775be2ae667e2acae3ccd347fed62bb3f5b3e7&v=4
+authorURL: https://github.com/kiwicopple
+logo: /images/customers/logos/chatbase.png
+logo_inverse: /images/customers/logos/light/chatbase.png
+og_image: /images/customers/og/chatbase.jpg
+tags:
+ - supabase
+date: '2023-10-06'
+company_url: https://www.chatbase.co/
+stats: []
+misc:
+ [
+ { label: 'Use case', text: 'AI chatbot builder' },
+ {
+ label: 'Solutions',
+ text: 'Supabase Database, Supabase Auth, Supabase Storage, Supabase Realtime',
+ },
+ ]
+about: 'Chatbase is an AI chatbot builder. It trains ChatGPT on your data and lets you add a chat widget to your website. Just upload a document or add a link to your website and get a chatbot that can answer any question about their content.'
+---
+
+After getting a new grad offer from his “dream company” rescinded with tech layoffs, Yasser Elsaid decided to embark on a different path and bootstrap his own venture.
+
+Two months before the release of the ChatGPT API, Yasser explored the GPT3 API and saw its immense potential. Leveraging Supabase as the backend, he built Chatbase in two weeks and launched it in February.
+
+Just five months after launching, Chatbase reached $1,000,000 in annualized revenue, making it one of the most successful single-founder AI products in the industry.
+
+## The Challenge
+
+Fascinated by the possibilities of GPT3, Yasser envisioned Chatbase as an AI-driven chatbot capable of handling complex customer queries in real time. However, building a cost-effective and reliable solution to ingest and store data and manage large-scale customer interactions by himself wasn’t trivial.
+
+Yasser faced another challenge: he needed to rapidly transform his idea into an actual product to maintain a competitive edge, but he didn’t have a team or funding.
+
+## Choosing Supabase
+
+Using Supabase for the first time, Yasser was immediately impressed by the developer experience and ease of use. He effortlessly implemented secure user authentication, relieving the pain of building such a system from scratch.
+
+But the most crucial aspect for Yasser was Supabase's all-in-one solution. Chatbase relies on Supabase for [database](https://supabase.com/database), [authentication](https://supabase.com/auth), [storage](https://supabase.com/storage), and [real-time](https://supabase.com/realtime) functionality. Yasser believes he saved between 100 and 150 hours by using a single solution to build almost the entire backend, instead of spending that time researching, learning, and implementing individual solutions for each component.
+
+With Supabase, everything seamlessly came together and he was able to go from idea to MVP in two weeks and launch soon after that.
+
+
+
+ Supabase is great because it has everything. I don’t need a different solution for
+ authentication, a different solution for database, or a different solution for storage.
+
+
+
+## What he built
+
+Chatbase is a powerful AI chatbot builder that enables users to train ChatGPT on their own data, revolutionizing the way businesses and individuals interact with customers. With seamless integration to over 5000 apps through Zapier, including popular platforms like Excel, Notion, Whatsapp, Discord, and Slack, Chatbase empowers everyone to deploy custom AI for their specific workflows.
+
+By simply uploading a PDF containing the desired information, Chatbase creates intelligent chatbots that automate customer support, handle frequently asked questions, and streamline communication channels, ultimately enhancing overall efficiency and user experience.
+
+Chatbase became a sensation, crossing 2,000,000 visitors in a matter of months and now has 2,200 paying customers.
+
+## The Results
+
+Leveraging Supabase as the backend gave Yasser speed and efficiency that turned into a significant edge over the competition, enabling him to establish a strong position in the market and make Chatbase one of the pioneering AI chatbot builders.
+
+As Chatbase gained traction and acquired users, Supabase's robust infrastructure seamlessly accommodated the surging user base, ensuring smooth scalability and uninterrupted growth.
+
+Within just five months, Chatbase grew to an astonishing $1,000,000 in annualized revenue.
+
+Chatbase is still bootstrapped, but now Yasser has a team of 5 who help with customer support, marketing, and development. He is now exploring Supabase's capabilities for achieving GDPR compliance and implementing Vault for end-to-end encryption.
+
+> To learn more about how Supabase can help you build and scale AI apps with ease, [reach out to us](https://forms.supabase.com/enterprise).
diff --git a/apps/www/_customers/markprompt.mdx b/apps/www/_customers/markprompt.mdx
index d5679d27b8f1b..7e5b33a380fec 100644
--- a/apps/www/_customers/markprompt.mdx
+++ b/apps/www/_customers/markprompt.mdx
@@ -1,5 +1,6 @@
---
title: Markprompt and Supabase - GDPR-compliant AI chatbots for docs and websites.
+meta_title: GDPR-compliant AI chatbots for docs and websites.
name: Markprompt
description: AI-powered chatbot platform, Markprompt, empowers developers to deliver efficient and GDPR-compliant prompt experiences on top of their content, by leveraging Supabase's secure and privacy-focused vector database and authentication solutions.
author: paul_copplestone
diff --git a/apps/www/_customers/mendableai.mdx b/apps/www/_customers/mendableai.mdx
index 73d2523387ba7..419ba1d054e36 100644
--- a/apps/www/_customers/mendableai.mdx
+++ b/apps/www/_customers/mendableai.mdx
@@ -2,7 +2,7 @@
name: Mendable
title: Mendable switches from Pinecone to Supabase Vector for PostgreSQL vector embeddings.
# Use meta_title to add a custom meta title. Otherwise it defaults to '{name} | Supabase Customer Stories':
-# meta_title: Mendable switches from Pinecone to Supabase Vector for PostgreSQL vector embeddings.
+meta_title: Mendable switches from Pinecone to Supabase Vector for PostgreSQL vector embeddings.
description: How Mendable boosts efficiency and accuracy of chat powered search for documentation using Supabase Vector.
# Use meta_description to add a custom meta description. Otherwise it defaults to {description}:
meta_description: How Mendable boosts efficiency and accuracy of chat powered search for documentation using Supabase Vector.
diff --git a/apps/www/_customers/quivr.mdx b/apps/www/_customers/quivr.mdx
new file mode 100644
index 0000000000000..eb0e83f49dd96
--- /dev/null
+++ b/apps/www/_customers/quivr.mdx
@@ -0,0 +1,78 @@
+---
+name: Quivr
+title: Quivr launches 5,000 Vector databases on Supabase.
+# Use meta_title to add a custom meta title. Otherwise it defaults to '{name} | Supabase Customer Stories':
+meta_title: Quivr launches 5,000 Vector databases on Supabase.
+description: Learn how one of the most popular Generative AI projects uses Supabase as their Vector Store.
+# Use meta_description to add a custom meta description. Otherwise it defaults to {description}:
+meta_description: Learn how one of the most popular Generative AI projects uses Supabase as their Vector Store.
+author: rory_wilding
+author_title: Supabase
+author_url: https://github.com/kiwicopple
+author_image_url: https://avatars2.githubusercontent.com/u/10214025?s=400&u=c6775be2ae667e2acae3ccd347fed62bb3f5b3e7&v=4
+authorURL: https://github.com/kiwicopple
+logo: /images/customers/logos/quivr.png
+logo_inverse: /images/customers/logos/light/quivr.png
+og_image: /images/customers/og/quivr.jpg
+tags:
+ - supabase
+date: '2023-10-05'
+company_url: https://quivr.app
+stats: []
+misc:
+ [
+ { label: 'Use case', text: 'Generative AI' },
+ { label: 'Solutions', text: 'Supabase Vector, Supabase Auth' },
+ ]
+about: "Quivr is an open source 'second brain'. It's like a private ChatGPT, personalized with your own data: you upload your documents and you can then search and ask questions using generative AI."
+---
+
+## The challenge: Building a second brain
+
+In May of 2023, [Stan Girard](https://twitter.com/_StanGirard) started building small prototypes that allowed him to “chat with documents”. After 2 weeks of research, he settled on an idea - build a “second brain” where a user could dump all their digital knowledge (audio, URLs, text, and code) into a vector store and query it with GPT-4.
+
+He built the first version in a single afternoon, pushed it to GitHub, and then [tweeted about it](https://twitter.com/_StanGirard/status/1657021618571313155?s=20). One viral tweet later, and [Quivr](https://github.com/StanGirard/quivr) was born.
+
+## Choosing a vector database
+
+A critical piece of the tech stack was the vector store. Stan needed a place to store millions of embeddings. After comparing Supabase, Pinecone, and Chroma, he settled on [Supabase Vector](https://supabase.com/vector), our open source vector offering for developing AI applications. The decision was driven largely by his familiarity with Postgres and the tight integration with Vercel.
+
+
+
+ Supabase Vector powered by pgvector allowed us to create a simple and efficient product. We are
+ storing over 1.6 million embeddings and the performance and results are great. Open source
+    developers can easily contribute thanks to the SQL syntax known by millions of developers.
+
+
+
+## Building an open source community
+
+It didn't take long for the Quivr community to grow. After the viral launch, the [Quivr repo](https://github.com/StanGirard/quivr) stayed at number 1 on [GitHub Trending](https://github.com/trending) for 4 days. Today, it has over 22,000 GitHub stars and 67 contributors. Supabase has been a key part of the open source stack since the beginning.
+
+
+
+ Because Supabase is open source, the possibility of running it locally made it a better choice
+ compared with other products like Auth0. Since Auth is integrated with the Vector database it
+ made Quivr much simpler. Features like Storage and Edge Functions allowed us to expand Quivr's
+ functionality while keeping the project simple.
+
+
+
+## Launching 5000 databases
+
+One of the most pivotal growth events was getting picked up by [an influential YouTuber](https://www.youtube.com/watch?v=rFEbz93G9U8). His 11-minute overview of Quivr launched over 2000 Quivr projects on Supabase in one week. There are now 5,100 Quivr databases on Supabase, making it one of the most influential communities on the Supabase platform.
+
+## Launching a hosted product
+
+Stan also launched a [hosted version of Quivr](https://www.quivr.app/), for users to sign up and get started immediately, without requiring any self-hosting infrastructure. Quivr's open source success has carried over to the hosted platform, with 17,000 signups in just over 2 months and 200 new users joining every day. The hosted database provides embedding storage for 1.6 million vectors and similarity search for over 100,000 files.
+
+With 500 daily active users, [Quivr.app](http://Quivr.app) is becoming the preferred way for users to create a second brain.
+
+## Tech Stack
+
+- Backend: FastAPI + LangChain, hosted on AWS Fargate
+- Frontend: Next.js, hosted on Vercel
+- Database: Supabase Vector, using pgvector
+- LLM: OpenAI, Anthropic, Nomic
+- Semantic search using GPT4All, Anthropic, and OpenAI
+- Auth: Supabase Auth
diff --git a/apps/www/components/Nav/Developers.tsx b/apps/www/components/Nav/Developers.tsx
index 71061cd2fc85a..9fdc46c12bff7 100644
--- a/apps/www/components/Nav/Developers.tsx
+++ b/apps/www/components/Nav/Developers.tsx
@@ -60,11 +60,11 @@ const Developers = () => {
return (