Merge pull request #2577 from neondatabase/dprice-dec-billing-updates
billing updates
danieltprice authored Dec 1, 2024
2 parents 39165d2 + 6880575 commit 84b72a8
Showing 20 changed files with 255 additions and 259 deletions.
2 changes: 1 addition & 1 deletion content/docs/ai/ai-scale-with-neon.md
@@ -32,7 +32,7 @@ To optimize `pgvector` index build time, you can increase the `maintenance_work_
SET maintenance_work_mem='10 GB';
```

The recommended `maintenance_work_mem` setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GiB is less than 50% of the RAM available for that compute size.
The recommended `maintenance_work_mem` setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size.
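The 50 to 60 percent guidance can be sanity-checked with a quick calculation. The sketch below is illustrative only: the 4 GB-of-RAM-per-CU ratio is an assumption inferred from the 7 CU / 28 GB example above, not an official formula.

```python
# Sketch: largest maintenance_work_mem that stays within a safety
# fraction of a compute's RAM. Assumes 4 GB of RAM per CU, matching
# the 7 CU / 28 GB example in the text (an assumption, not a spec).

def max_safe_maintenance_work_mem_gb(compute_units: float,
                                     ram_per_cu_gb: float = 4.0,
                                     safety_fraction: float = 0.5) -> float:
    """Largest maintenance_work_mem (GB) within the safety fraction."""
    return compute_units * ram_per_cu_gb * safety_fraction

limit = max_safe_maintenance_work_mem_gb(7)  # 7 CU compute, 28 GB RAM
print(limit)        # 14.0 -> the 10 GB setting above is comfortably under it
```

A `maintenance_work_mem='10 GB'` setting passes this check on a 7 CU compute because 10 GB is below the 14 GB that a 50% cap allows.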

## Autoscaling

4 changes: 2 additions & 2 deletions content/docs/extensions/pgvector.md
@@ -291,7 +291,7 @@ Like other index types, it’s faster to create an index after loading your init
SET maintenance_work_mem='10 GB';
```

The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GiB is less than 50% of the RAM available for that compute size.
The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size.

- `max_parallel_maintenance_workers`

@@ -434,7 +434,7 @@ Like other index types, it’s faster to create an index after loading your init
SET maintenance_work_mem='10 GB';
```

The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GiB is less than 50% of the RAM available for that compute size.
The recommended setting is your working set size (the size of your tuples for vector index creation). However, your `maintenance_work_mem` setting should not exceed 50 to 60 percent of your compute's available RAM (see the table above). For example, the `maintenance_work_mem='10 GB'` setting shown above has been successfully tested on a 7 CU compute, which has 28 GB of RAM, as 10 GB is less than 50% of the RAM available for that compute size.

- `max_parallel_maintenance_workers`

2 changes: 1 addition & 1 deletion content/docs/get-started-with-neon/dev-experience.md
@@ -21,7 +21,7 @@ You can build your database branching workflows using the Neon CLI, Neon API, or
neon branches create --name dev/alex
```

Neon's copy-on-write technique makes branching instantaneous and cost-efficient. Whether your database is 1 GiB or 1 TiB, [it only takes seconds to create a branch](https://neon.tech/blog/how-to-copy-large-postgres-databases-in-seconds), and Neon's branches are full database copies, not partial or schema-only.
Neon's copy-on-write technique makes branching instantaneous and cost-efficient. Whether your database is 1 GB or 1 TB, [it only takes seconds to create a branch](https://neon.tech/blog/how-to-copy-large-postgres-databases-in-seconds), and Neon's branches are full database copies, not partial or schema-only.

Also, with Neon, you can easily keep your development branches up-to-date by resetting your schema and data to the latest from `main` with a simple command.

2 changes: 1 addition & 1 deletion content/docs/guides/autoscaling-algorithm.md
@@ -68,7 +68,7 @@ An important part of the scaling algorithm is estimating your current working se
Every 20 seconds, the autoscaler-agent checks the working set size across a variety of time windows, ranging from 1 to 60 minutes. The goal is to fit your working set within 75% of the compute’s RAM allocated to the LFC. If your working set exceeds this threshold, the algorithm increases compute size to expand the LFC, keeping frequently accessed data in memory for faster access. To learn more about how we do this, see [Dynamically estimating and scaling Postgres’ working set size](https://neon.tech/blog/dynamically-estimating-and-scaling-postgres-working-set-size).

<Admonition type="note">
If your dataset is small enough, you can improve performance by keeping the entire dataset in memory. Check your database size on the Monitoring [dashboard](/docs/introduction/monitoring-page#database-size) and adjust your minimum compute size accordingly. For example, a 6.4 GiB database can comfortably fit within a compute size of 2 vCPU with 8 GB of RAM (where the LFC can use up to 80% of the available RAM).
If your dataset is small enough, you can improve performance by keeping the entire dataset in memory. Check your database size on the Monitoring [dashboard](/docs/introduction/monitoring-page#database-size) and adjust your minimum compute size accordingly. For example, a 6.4 GB database can comfortably fit within a compute size of 2 vCPU with 8 GB of RAM (where the LFC can use up to 80% of the available RAM).
</Admonition>
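The two thresholds above (the LFC using up to 80% of RAM, and the autoscaler targeting 75% of the LFC) can be combined into a simple check. This is a sketch of the arithmetic only, using the fractions quoted in this section, not the actual autoscaler-agent logic:

```python
# Sketch: would a working set of a given size fit the autoscaler's target?
# Assumes the LFC can use up to 80% of RAM (per the note above) and that
# the algorithm aims to keep the working set within 75% of the LFC.

def lfc_size_gb(ram_gb: float, lfc_fraction: float = 0.80) -> float:
    """Maximum LFC size for a compute with the given RAM."""
    return ram_gb * lfc_fraction

def within_autoscaler_target(working_set_gb: float, ram_gb: float) -> bool:
    """True if the working set fits within 75% of the LFC."""
    return working_set_gb <= 0.75 * lfc_size_gb(ram_gb)

print(lfc_size_gb(8))                    # 6.4 -> a 6.4 GB dataset just fits the LFC
print(within_autoscaler_target(4.0, 8))  # True  -> no scale-up needed
print(within_autoscaler_target(6.4, 8))  # False -> the algorithm would scale up
```

Note the distinction the numbers illustrate: a 6.4 GB dataset fits entirely inside the LFC of an 8 GB compute, but it exceeds the 75% headroom target, so the autoscaler would still push toward a larger compute.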

## How often the metrics are polled
10 changes: 5 additions & 5 deletions content/docs/guides/partner-consumption-limits.md
@@ -60,12 +60,12 @@ Let's say you want to set limits for an application with two tiers, Trial and Pr
| -------------------- | -------------------------------- | ------------------------------------------------- |
| active_time_seconds | 633,600 (business month 22 days) | 2,592,000 (30 days) |
| compute_time_seconds | 158,400 (approx 44 hours) | 10,368,000 (4 times the active hours for 4 vCPUs) |
| written_data_bytes | 1,000,000,000 (approx. 1 GiB) | 50,000,000,000 (approx. 50 GiB) |
| data_transfer_bytes | 500,000,000 (approx. 500 MiB) | 10,000,000,000 (approx. 10 GiB) |
| written_data_bytes | 1,000,000,000 (approx. 1 GB) | 50,000,000,000 (approx. 50 GB) |
| data_transfer_bytes | 500,000,000 (approx. 500 MB) | 10,000,000,000 (approx. 10 GB) |

| Parameter (branch) | Trial | Pro |
| ------------------ | ----------------------------- | ------------------------------- |
| logical_size_bytes | 100,000,000 (approx. 100 MiB) | 10,000,000,000 (approx. 10 GiB) |
| Parameter (branch) | Trial | Pro |
| ------------------ | ----------------------------- | ------------------------------ |
| logical_size_bytes | 100,000,000 (approx. 100 MB)  | 10,000,000,000 (approx. 10 GB) |
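The Pro-tier figures in the first table follow directly from the stated assumptions (30 days of active time, 4 vCPUs). A quick sketch of that arithmetic, using only the numbers quoted above:

```python
# Sketch: derive the Pro-tier limits shown in the table above.
seconds_per_day = 24 * 60 * 60                   # 86,400

active_time_seconds = 30 * seconds_per_day       # 30 days
compute_time_seconds = active_time_seconds * 4   # 4x active time for 4 vCPUs
print(active_time_seconds)    # 2592000
print(compute_time_seconds)   # 10368000

# Byte limits in the table are decimal approximations:
# 10,000,000,000 bytes is approximately 10 GB (10^9 bytes per GB).
print(10_000_000_000 // 10**9)  # 10
```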

### Guidelines

2 changes: 1 addition & 1 deletion content/docs/guides/vercel-postgres-transition-guide.md
@@ -93,7 +93,7 @@ The Vercel Pro plan is tailored for professional developers, freelancers, and
| Compute Time | 100 Hours | 300 Hours |
| Data Transfer | 256 MB | Reasonable usage (no hard limit) |
| Database | First Database | 1000 |
| Storage | First 256 MB | Up to 10 GiB |
| Storage | First 256 MB | Up to 10 GB |

Both the Vercel Pro and Neon Launch plans offer additional use (called "Extra usage" in Neon) for a fee, as outlined below. In Neon, additional units of compute and storage cost more, but you get more compute and storage with your plan's monthly fee, and Neon does not charge for data transfer, additional databases, or written data.

20 changes: 10 additions & 10 deletions content/docs/introduction/billing-sample.md
@@ -7,7 +7,7 @@ updatedOn: '2024-08-12T11:07:15.292Z'

## Generative AI example

To give you a clearer sense of how billing works, let's explore a real-world example. Consider a simple image generation app that leverages Neon as the serverless database for storing user authentication details as well as records of image generation per user. Analyzing this usage over a monthly billing period can help you understand the nuances of Neon billing based on actual scenarios and choose the right pricing plan.
To give you a clearer sense of how billing works, let's explore a real-world example. Consider a simple image generation app that leverages Neon as the serverless database for storing user authentication details as well as records of image generation per user. Analyzing this usage over a monthly billing period can help you understand Neon billing based on actual scenarios and choose the right pricing plan.

## Overview: Costs by usage

@@ -35,7 +35,7 @@ Given the high number of connections used by this application, [connection pooli
### Compute hours and storage:

- **Compute hours.** This is the metric Neon uses to track compute usage. 1 compute hour is equal to 1 active hour for a compute with 1 vCPU. If you have a compute with 0.25 vCPU, as you do in this sample scenario, it takes 4 active hours to use 1 compute hour. You can use this formula to calculate compute hour usage: `compute hours = compute size * active hours`. The average daily number of active hours is 23.94, totaling 718.35 active hours for the sample month. This indicates steady but low-intensity database usage.
- **Storage.** The amount of database storage currently used by your project. It includes the total volume of data across all branches plus a history of database changes. The amount of history retained is defined by your chosen [history retention period](/docs/manage/projects#configure-history-retention). The storage size in this sample scenario is now over 25 GiB and growing steadily with new written data as the user base grows.
- **Storage.** The amount of database storage currently used by your project. It includes the total volume of data across all branches plus a history of database changes. The amount of history retained is defined by your chosen [history retention period](/docs/manage/projects#configure-history-retention). The storage size in this sample scenario is now over 25 GB and growing steadily with new written data as the user base grows.
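The compute-hours formula in the first bullet can be applied directly to this scenario. A minimal sketch, using the 0.25 vCPU compute size and 718.35 active hours from the example:

```python
# compute hours = compute size (vCPU) * active hours
compute_size = 0.25      # the sample app's compute size
active_hours = 718.35    # the month's total active hours from the example

compute_hours = compute_size * active_hours
print(round(compute_hours, 2))  # 179.59
```

At roughly 180 compute hours, the app sits well inside the 300 compute hours included with the Launch plan, which is why compute never drives the bill in this example; storage does.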

## Usage breakdown for the month

@@ -51,7 +51,7 @@ A daily average of 23.94 active hours amounts to 718.35 active hours for the mon

### Storage

Project storage grew 4.4 GiB over the month, from 23.6 GiB to 28 GiB.
Project storage grew 4.4 GB over the month, from 23.6 GB to 28 GB.

![Sample storage graph](/docs/introduction/billing_storage_graph.png)

@@ -65,30 +65,30 @@ Here are the monthly totals for compute and storage usage.

| Metric | Start of the month | End of the month |
| :------ | :----------------- | :--------------- |
| Storage | 23.6 GiB | 28 GiB |
| Storage | 23.6 GB-month | 28 GB-month |

### Which Neon pricing plan fits best?

At roughly 718 active hours for the month with a compute size of 0.25 vCPU, this application is well under the 300 compute hours (1,200 active hours)/month allowance for the [Launch](/docs/introduction/plans##launch) plan and 750 compute hours (3000 active hours)/month for the [Scale](/docs/introduction/plans#scale) plan. However, with a storage size of 25 GiB, the storage requirements for the application are over the Launch plan allowance of 10 GiB. You could go with the Launch plan which offers 10 GiB of storage plus extra storage at $3.50 per 2 GiB unit or the Scale plan which offers 50 GiB storage. Let's do that math to compare monthly bills:
At roughly 718 active hours for the month with a compute size of 0.25 vCPU, this application is well under the 300 compute hours (1,200 active hours)/month allowance for the [Launch](/docs/introduction/plans#launch) plan and 750 compute hours (3,000 active hours)/month for the [Scale](/docs/introduction/plans#scale) plan. However, with a storage size of 25 GB, the storage requirements for the application are over the Launch plan allowance of 10 GB-month. You could go with the Launch plan, which offers 10 GB-month of storage plus extra storage at $1.75 per GB-month, or the Scale plan, which offers 50 GB-month of storage. Let's do the math to compare monthly bills:

**Launch plan**:

- Base fee: $19
- Storage usage: 25 GiB (15 GiB over the allowance)
- Storage usage: 25 GB-month (15 GB-month over the allowance)
- Compute usage: 718 active hours (within the 300 compute hour/1200 active hour allowance)
- Extra storage fee: 8 \* $3.50 = $28
- Extra storage fee: 15 \* $1.75 = $26.25
- Extra compute fee: $0

_Total estimate_: $19 + $28 = $47 per month
**Total estimate**: $19 + $26.25 = **$45.25 per month**

**Scale plan**:

- Base fee: $69
- Storage usage: 25 GiB (within the 50 GiB allowance)
- Storage usage: 25 GB-month (within the 50 GB-month allowance)
- Compute usage: 718 active hours (within the 750 compute hour/3000 active hour allowance)
- Extra storage fee: $0
- Extra compute fee: $0

_Total estimate_: $69 per month

The Launch plan is more economical in the short term, but you might consider upgrading to the [Scale](/docs/introduction/plans#scale) plan when purchasing extra storage on the Launch plan is no longer cheaper than moving up to the $69 per month Scale plan. The Scale plan has a higher monthly storage allowance (50 GiB) and a cheaper per-unit extra storage cost (10 GiB at $15 vs. 2 GiB at $3.5). The Scale plan also offers additional features and more projects, which may factor into your decision about when to upgrade.
The Launch plan is more economical in the short term, but you might consider upgrading to the [Scale](/docs/introduction/plans#scale) plan when purchasing extra storage on the Launch plan is no longer cheaper than moving up to the $69 per month Scale plan. The Scale plan has a higher monthly storage allowance (50 GB-month) and a cheaper per-unit extra storage cost (1 GB-month at $1.50 on Scale vs. 1 GB-month at $1.75 on Launch). The Scale plan also offers additional features and more projects, which may factor into your decision about when to upgrade.
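The comparison above, including the break-even point for upgrading, can be reproduced with a few lines. This is a sketch using only the prices and allowances quoted in this example:

```python
# Sketch: monthly cost on each plan, per the figures in this example.
# Launch: $19 base, 10 GB-month storage included, $1.75 per extra GB-month.
# Scale:  $69 base, 50 GB-month storage included, $1.50 per extra GB-month.

def launch_total(storage_gb_month: float) -> float:
    extra = max(0.0, storage_gb_month - 10)
    return 19.0 + extra * 1.75

def scale_total(storage_gb_month: float) -> float:
    extra = max(0.0, storage_gb_month - 50)
    return 69.0 + extra * 1.50

print(launch_total(25))   # 45.25 -> matches the Launch estimate above
print(scale_total(25))    # 69.0  -> matches the Scale estimate above

# Break-even: 19 + (s - 10) * 1.75 = 69 when s is about 38.6 GB-month,
# so Launch stops being cheaper once storage approaches ~39 GB-month.
print(launch_total(39) > scale_total(39))  # True
```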