Update postage-stamp.md
reworded and corrected here and there
added a caveat section at the end about size in chunks vs size in bytes
zelig authored Oct 10, 2023
1 parent a1fb1a5 commit bb3362a
Showing 1 changed file with 9 additions and 8 deletions.
17 changes: 9 additions & 8 deletions docs/learn/technology/contracts/postage-stamp.md
@@ -6,15 +6,15 @@ id: postage-stamp
import DepthCalc from '@site/src/components/DepthCalc';
import BatchCostCalc from '@site/src/components/BatchCostCalc';

The [postage stamp contract](https://github.com/ethersphere/storage-incentives/blob/master/src/PostageStamp.sol) is a smart contract which is a key part of Swarm's [storage incentives](/docs/learn/technology/incentives) which make up the foundation of Swarm's self-sustaining economic system.
The [postage stamp contract](https://github.com/ethersphere/storage-incentives/blob/master/src/PostageStamp.sol) is one component in the suite of smart contracts orchestrating Swarm's [storage incentives](/docs/learn/technology/incentives) which make up the foundation of Swarm's self-sustaining economic system.

When a node uploads data to Swarm, it 'attaches' postage stamps to each [chunk](/docs/learn/technology/DISC) of data. Postage stamps are issued in batches rather than one by one. The value assigned to a stamp indicates how much it is worth to persist the associated data on Swarm, which nodes use to prioritize which chunks to remove from their reserve first.
When a node uploads data to Swarm, it 'attaches' postage stamps to each [chunk](/docs/learn/technology/DISC) of data. Postage stamps are purchased in batches rather than one by one. The value assigned to a stamp indicates how much it is worth to persist the associated data on Swarm, which nodes use to prioritize which chunks to remove from their reserve first.
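In practice, a batch is purchased through a Bee node's HTTP API, which calls the postage stamp contract on the node's behalf. A minimal sketch in Go (the endpoint path and default port `1633` reflect recent Bee versions; older versions exposed stamps on the debug API port `1635`, so adjust for your setup):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// POST /stamps/{amount}/{depth}: amount is the per-chunk balance in PLUR,
	// depth determines the number of storage slots (2^depth) in the batch.
	resp, err := http.Post("http://localhost:1633/stamps/100000000/20", "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // on success, a JSON object containing the new batchID
}
```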

The value of a postage stamp decreases over time as if storage rent was regularly deducted from the batch balance. Once the value of a stamp is no longer sufficient, the associated chunk is evicted from the reserve.
The value of a postage stamp decreases over time as if storage rent were regularly deducted from the batch balance. We say that a stamp expires when the batch it is issued from has insufficient balance. A chunk with an expired stamp cannot be used in the proof of entitlement that storer nodes must submit in order to be compensated for their contributed storage space. Expired chunks are therefore evicted from nodes' reserves and put into the cache, which leaves their sustenance up to their popularity.
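To make the rent intuition concrete, a batch's remaining time to live can be estimated from its per-chunk balance, the current storage price, and the block time. A minimal sketch with purely illustrative values (the live price is set by the price oracle contract):

```go
package main

import "fmt"

func main() {
	// Illustrative values only; the live price comes from the price oracle.
	const (
		batchBalance  = 100_000_000 // per-chunk batch balance, in PLUR
		pricePerBlock = 24_000      // storage price, in PLUR per chunk per block
		blockTimeSec  = 5           // approximate Gnosis Chain block time, in seconds
	)
	ttlSec := batchBalance / pricePerBlock * blockTimeSec
	fmt.Printf("batch TTL ≈ %d seconds (≈ %.1f hours)\n", ttlSec, float64(ttlSec)/3600)
}
```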

## Batch Buckets

Postage stamps are issued in batches with a certain number of storage slots divided amongst $$2^{bucketDepth}$$ equally sized address space buckets. Each bucket is responsible for storing chunks that fall within a certain range of the address space. When uploaded, files are split into 4kb chunks, each chunk is assigned a unique address, and each chunk is then assigned to the bucket in which its address falls. While the value of `bucket depth` is not defined in The Book of Swarm, in its current implementation in the Bee client, `bucket depth` has been set to 16, so there are a total of 65,536 buckets.
Postage stamps are issued in batches with a certain number of storage slots partitioned into $$2^{bucketDepth}$$ equally sized address space buckets. Each bucket is responsible for storing chunks that fall within a certain range of the address space. When uploaded, files are split into 4kb chunks, each chunk is assigned a unique address, and each chunk is then assigned to the bucket in which its address falls. Falling into the same range means matching on the same `n` leading bits. This restriction is necessary to incentivise uniform utilisation of the address space, and it is fair since the distribution of content addresses is uniform as well. The number of leading bits that determines bucket membership is called the uniformity depth (also called `bucket depth`). The uniformity depth is set to 16, so there are a total of `2^16 = 65,536` buckets.
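In other words, the bucket a chunk belongs to can be read straight off the first 16 bits of its address. A minimal sketch of this assignment (the helper below is illustrative, not the Bee client's actual code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const bucketDepth = 16 // number of leading bits that determine bucket membership

// bucketIndex returns the index of the bucket a chunk address falls into.
// With bucketDepth = 16 this is simply the address's first two bytes.
func bucketIndex(addr []byte) uint32 {
	return uint32(binary.BigEndian.Uint16(addr[:2]))
}

func main() {
	addr := []byte{0x1a, 0x2b} // leading two bytes of a hypothetical 32-byte chunk address
	fmt.Printf("bucket %d of %d\n", bucketIndex(addr), 1<<bucketDepth)
}
```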

### Bucket Size

@@ -134,9 +134,6 @@ The details of how the effective rates of utilisation are calculated will be pub

### Effective Utilisation Table

:::info
This table is based on preliminary calculations and may be subject to change.
:::

The provided table shows the effective volume for each batch depth from 20 to 41. The "utilisation rate" is the rate of utilisation a stamp batch can reach with a 0.1% failure rate (that is, there is a 1/1000 chance the batch will become fully utilised before reaching that utilisation rate). The "effective volume" figure shows the actual amount of data which can be stored at the effective rate. The effective volume figure is the one which should be used as the de facto maximum amount of data that a batch can store before either becoming fully utilised (for immutable batches) or starting to overwrite older chunks (for mutable batches).
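To illustrate how the figures relate, here is the arithmetic for depth 41 (utilisation rate taken from the table below; the sketch simply multiplies the slot count by the chunk size and by the utilisation rate):

```go
package main

import "fmt"

func main() {
	const chunkSize = 4096.0   // bytes per chunk (2^12)
	const depth = 41           // batch depth
	const utilisation = 0.9990 // utilisation rate for depth 41

	theoretical := float64(uint64(1)<<depth) * chunkSize // total slot capacity in bytes
	effective := theoretical * utilisation               // usable capacity at 0.1% failure rate

	fmt.Printf("theoretical: %.2f PB, effective: %.2f PB\n", theoretical/1e15, effective/1e15)
	// theoretical: 9.01 PB, effective: 9.00 PB — matching the depth 41 row of the table
}
```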

@@ -165,5 +162,9 @@ The provided table shows the effective volume for each batch depth from 20 to 41
| 40 | 99.86% | 4.50 PB | 4.50 PB |
| 41 | 99.90% | 9.01 PB | 9.00 PB |

:::info
This table is based on preliminary calculations and may be subject to change.
:::


Nodes' storage is actually accounted in the number of chunks, which we counted as 4kb (2^12 bytes) each, but in fact some SOC chunks can be a few bytes longer and some chunks can be smaller, so the conversion is not precise. On the other hand, when users buy capacity they usually expect to be able to upload files with a sum total size of that capacity. However, because of the way Swarm represents files as a Merkle tree, the intermediate chunks are additional overhead that needs to be accounted for.
Besides this, when a node stores chunks it maintains additional indexes, therefore the disk space that a maximally filled reserve would demand cannot be calculated with perfect accuracy.
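For a rough sense of the Merkle tree overhead: each 4kb intermediate chunk holds 128 references of 32 bytes, so the intermediate chunks add roughly 1/128 (under 1%) on top of the data chunks. A back-of-the-envelope sketch, assuming uniform 4kb chunks and a branching factor of 128:

```go
package main

import "fmt"

// totalChunks gives a rough estimate of the number of chunks needed to store
// a file of the given size, counting the intermediate chunks of the Merkle
// tree; real chunk counts can differ slightly for the reasons above.
func totalChunks(fileSize int64) int64 {
	const chunkSize, branching = 4096, 128
	level := (fileSize + chunkSize - 1) / chunkSize // data chunks
	total := level
	for level > 1 {
		level = (level + branching - 1) / branching // parent chunks at the next level up
		total += level
	}
	return total
}

func main() {
	fmt.Println(totalChunks(1 << 30)) // chunks for a 1 GiB file
}
```

For a 1 GiB file this yields 262,144 data chunks plus 2,065 intermediate ones, i.e. roughly 0.8% overhead.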
