Commit e17ad8f: resolve conflict
NoahMaizels committed Oct 13, 2023 (2 parents: 25b68c7 + c369040)
Showing 4 changed files with 37 additions and 41 deletions.
8 changes: 8 additions & 0 deletions docs/bee/installation/install.md
@@ -289,6 +289,14 @@ resolver-options: ["https://mainnet.infura.io/v3/<<your-api-key>>"]
```
### Set Target Neighborhood (Optional)
When setting up a new Bee node, a randomly generated overlay address determines the node's [neighborhood](/docs/learn/technology/disc#neighborhoods). With the `target-neighborhood` config option, however, an overlay address is generated which falls within a specific neighborhood. There are two good reasons for doing this: choosing a less populated neighborhood increases a node's chances of winning rewards, and setting up nodes in less populated neighborhoods strengthens the resiliency of the Swarm network. Using the `target-neighborhood` option is therefore recommended.
To use this option, it's first necessary to identify potential target neighborhoods. A convenient tool for finding underpopulated neighborhoods is available at the [Swarmscan website](https://swarmscan.io/neighborhoods). It returns the leading bits of target neighborhoods, ordered from least to most populated. Simply copy the leading bits of one of the least populated neighborhoods and use them to set `target-neighborhood`. An overlay address within that neighborhood will then be generated when Bee starts for the first time.
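For example, in the node's YAML config file (a sketch; the prefix shown is hypothetical, substitute the leading bits you copied from Swarmscan):

```yaml
# target a specific neighborhood by its leading bits (example value)
target-neighborhood: "0110110101"
```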
See the [staking page](/docs/bee/working-with-bee/staking) for more information.
## 3. Find Bee address
As part of the process of starting a Bee full node or light node, the node must issue a Gnosis Chain transaction which is paid for using xDAI. We therefore need to find our node's Gnosis Chain address, which we can read directly from our key file:
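A minimal sketch, assuming a package-manager install with the default key location (the path may differ for Docker or binary installs):

```bash
# print the node's keystore file; its "address" field, prefixed with 0x,
# is the node's Gnosis Chain address
sudo cat /var/lib/bee/keys/swarm.key
```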
2 changes: 1 addition & 1 deletion docs/desktop/postage-stamps.md
@@ -30,7 +30,7 @@ Batch [depth and amount](/docs/learn/technology/contracts/postage-stamp) are the two key parameters of a postage stamp batch.
Inputting a value for depth allows you to preview the upper limit of data which can be uploaded for that depth.

:::info
Note that a batch typically becomes fully utilized before its upper limit is reached, so the actual amount of data which can be uploaded is lower than the limit. This becomes less of a problem at higher depth values, since average batch utilization comes closer to the upper limit as depth increases. For this reason, Swarm Desktop requires a minimum batch depth of 24.
:::

Inputting values for amount and depth together also lets you preview the total cost of the postage stamp batch as well as its TTL (time to live: how long the batch can store data on Swarm). Click the ***Buy New Stamp*** button to purchase the stamp batch.
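The TTL preview follows from the batch's per-chunk balance and the current storage price. A sketch of the relationship, assuming Gnosis Chain's roughly 5-second block time:

$$
\text{TTL (seconds)} \approx \frac{\text{per-chunk balance (PLUR)}}{\text{price (PLUR per chunk per block)}} \times 5
$$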
25 changes: 11 additions & 14 deletions docs/develop/access-the-swarm/buy-a-stamp-batch.md
@@ -40,16 +40,16 @@ $$2^{batch \_ depth} \times {amount}$$
The paid xBZZ forms the `balance` of the batch. This `balance` is then slowly depleted as time ticks on and blocks are mined on Gnosis Chain.


For example, with a `batch depth` of 24 and an `amount` of 1000000000 PLUR:

$$
2^{24} \times 1000000000 = 16777216000000000 \text{ PLUR} = 1.6777216 \text{ xBZZ}
$$

### Batch Depth

The `batch depth`, together with the constant `bucket depth` of 16, determines _how many chunks_ are allowed in each bucket. The number of chunks allowed in each bucket is calculated like so:
$$2^{batch \_ depth - bucket \_ depth} = 2^{batch \_ depth - 16}$$

The minimum `batch depth` is 24.
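For instance, at the minimum `batch depth` of 24:

$$
2^{24 - 16} = 2^{8} = 256 \text{ chunks per bucket}
$$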


### Calculating the `depth` and `amount` of your batch of stamps
@@ -60,15 +60,12 @@ Right now, the easiest way to start uploading content is to buy a large enough batch of stamps.

The `amount` you specify will govern how long your chunks live in Swarm. Because pricing is variable, it is not possible to predict exactly when your chunks will run out of balance; however, this can be estimated based on the current price and the remaining batch balance.
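You can make this estimate yourself by reading the current price from your node. A sketch, assuming the debug API on its default port 1635 (recent Bee versions expose the price as `currentPrice` on the `/chainstate` endpoint):

```bash
# fetch the current storage price in PLUR per chunk per block;
# amount / currentPrice * ~5 s (Gnosis block time) estimates the TTL
curl -s http://localhost:1635/chainstate
```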


:::warning
When you purchase a batch of stamps, you agree to burn xBZZ. Although your 'balance' slowly decrements as time goes on, there is no way to withdraw xBZZ from a batch. This is an outcome of Swarm's decentralised design. To read more about how the different components of Swarm fit together, read <a href="https://www.ethswarm.org/The-Book-of-Swarm.pdf" target="_blank" rel="noopener noreferrer">The Book of Swarm</a>.
:::

```bash
curl -s -XPOST http://localhost:1635/stamps/10000000000/24
```
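If the purchase succeeds, the node responds with the ID of the new batch (a sketch of the response shape, reusing the batch ID from the dilution example further down):

```json
{
  "batchID": "0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5"
}
```

This `batchID` is what you pass to the stamp endpoints below.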

@@ -109,7 +106,7 @@ The remaining *time to live* in seconds is shown in the returned json object as `batchTTL`.
"utilization": 0,
"usable": true,
"label": "",
"depth": 20,
"depth": 24,
"amount": "113314620",
"bucketDepth": 16,
"blockNumber": 19727733,
@@ -137,7 +134,7 @@ curl -X PATCH "http://localhost:1635/stamps/topup/6d32e6f1b724f8658830e51f8f57aa

In order to store more data with a batch of stamps, you must "dilute" the batch. Dilution simply refers to increasing the depth of the batch, thereby allowing it to store a greater number of chunks. As dilution only increases the depth of a batch and does not automatically top up its balance with more xBZZ, dilution will decrease the TTL of the batch. Therefore, if you wish to store more with your batch but don't want to decrease its TTL, you will need to both dilute it and top it up with more xBZZ.
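Since dilution spreads the same balance over more chunks, each one-step increase in depth roughly halves the TTL. As a rule of thumb (ignoring price changes while blocks are mined):

$$
\text{TTL}_{new} \approx \frac{\text{TTL}_{old}}{2^{\text{depth increase}}}
$$

In the example below the depth goes from 24 to 26, an increase of 2, so the TTL drops to roughly a quarter: 2083223 / 4 ≈ 520,806, close to the reported 519265.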

Here we call the `/stamps` endpoint and find a batch with `depth` 24 and a `batchTTL` of 2083223, which we wish to dilute:

```bash
curl http://localhost:1635/stamps
@@ -151,7 +148,7 @@
"utilization": 0,
"usable": true,
"label": "",
"depth": 17,
"depth": 24,
"amount": "10000000000",
"bucketDepth": 16,
"blockNumber": 29717348,
@@ -164,10 +161,10 @@
}
```

Next we call the [`dilute`](/api/#tag/Postage-Stamps/paths/~1stamps~1dilute~1{batch_id}~1{depth}/patch) endpoint to increase the `depth` of the batch using the `batchID` and our new `depth` of 26:

```bash
curl -s -XPATCH http://localhost:1635/stamps/dilute/0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5/26
```
The response contains the `txHash` of our successful transaction:

@@ -184,7 +181,7 @@ And finally we use the `/stamps` endpoint again to confirm the new `depth` and decreased `batchTTL`:
curl http://localhost:1635/stamps
```

We can see the new `depth` of 26 and a decreased `batchTTL` of 519265.

```json
{
@@ -194,7 +191,7 @@
"utilization": 0,
"usable": true,
"label": "",
"depth": 19,
"depth": 26,
"amount": "10000000000",
"bucketDepth": 16,
"blockNumber": 29717348,
43 changes: 17 additions & 26 deletions docs/learn/technology/contracts/postage-stamp.md
@@ -3,9 +3,6 @@ title: Postage Stamp
id: postage-stamp
---


The [postage stamp contract](https://github.com/ethersphere/storage-incentives/blob/master/src/PostageStamp.sol) is one component in the suite of smart contracts orchestrating Swarm's [storage incentives](/docs/learn/technology/incentives), which form the foundation of Swarm's self-sustaining economic system.

When a node uploads data to Swarm, it 'attaches' postage stamps to each [chunk](/docs/learn/technology/DISC) of data. Postage stamps are purchased in batches rather than one by one. The value assigned to a stamp indicates how much it is worth to persist the associated data on Swarm; nodes use this value to prioritize which chunks to remove from their reserves first.
@@ -18,33 +15,30 @@ Postage stamps are issued in batches with a certain number of storage slots partitioned into buckets.

### Bucket Size

Each bucket has a certain number of slots which can be "filled" by chunks (in other words, for each bucket, a certain number of chunks can be stamped). Once all the slots of a bucket are filled, the entire postage batch is fully utilised and can no longer be used to upload additional data.

Together with `batch depth`, `bucket depth` determines how many chunks are allowed in each bucket. The number of chunks allowed in each bucket is calculated like so:

$$
2^{(batchDepth - bucketDepth)}
$$

So with a batch depth of 24 and a bucket depth of 16:

$$
2^{(24 - 16)} = 2^{8} = 256 \text{ chunks/bucket}
$$

## Batch Depth and Batch Amount

Each batch of stamps has two key parameters, `batch depth` and `amount`, which are recorded on Gnosis Chain at issuance. Note that these "depths" do not refer to the depth terms used to describe topology which are outlined [here in the glossary](/docs/learn/glossary#depth-types).

### Batch Depth

`Batch depth` determines how much data can be stored by a batch. The number of chunks which can be stored (stamped) by a batch is equal to $$2^{batchDepth}$$.

For a batch with a `batch depth` of 24, a maximum of $$2^{24} = 16,777,216$$ chunks can be stamped.

Since we know that one chunk can store 4 KB of data, we can calculate the theoretical maximum amount of data which can be stored by a batch from its `batch depth`.
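In general, since 4 KB is $$2^{12}$$ bytes, the theoretical maximum in bytes is:

$$
2^{batchDepth} \times 2^{12} = 2^{batchDepth + 12} \text{ bytes}
$$

This is the calculation used in the depth-24 example later in this section.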

@@ -65,13 +59,9 @@ The paid xBZZ forms the `balance` of the batch. This `balance` is then slowly depleted as time ticks on and blocks are mined on Gnosis Chain.
For example, with a `batch depth` of 24 and an `amount` of 1000000000 PLUR:

$$
2^{24} \times 1000000000 = 16777216000000000 \text{ PLUR} = 1.6777216 \text{ xBZZ}
$$



## Batch Utilisation

@@ -83,7 +73,12 @@ Utilisation of an immutable batch is computed using a hash map of size $$2^{bucketDepth}$$, i.e. 65,536 buckets.

As chunks are uploaded to Swarm, each chunk is assigned to a bucket based on the first 16 binary digits of the [chunk's hash](/docs/learn/technology/disc#chunks): the chunk is assigned to the bucket whose key matches the first 16 bits of its hash, and that bucket's counter is incremented by 1.
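As an illustration, the bucket index can be read directly off a chunk's hex-encoded hash, since the first 16 bits are simply the first four hex characters (a sketch using a hypothetical hash):

```bash
chunk_hash="6f2c4de09b1a..."   # hypothetical chunk hash (hex)
# the bucket index is the integer value of the first 4 hex characters (16 bits)
bucket_index=$((16#${chunk_hash:0:4}))
echo "$bucket_index"           # 0..65535 (here: 28460)
```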

The batch is deemed "full" when ANY of these counters reaches a certain maximum value. The maximum value is computed from the batch depth like so: $$2^{(batchDepth-bucketDepth)}$$. For example, with a batch depth of 24, the maximum value is $$2^{(24-16)}$$ or 256. A bucket can be thought of as having a number of "slots" equal to this maximum value, and every time the bucket's counter is incremented, one of its slots gets filled.


:::info
Note that 18 is below the minimum batch depth, but is used in these examples to simplify the explanation of batch utilisation.
:::

In the diagram below, the batch depth is 18, so there are $$2^{(18-16)}$$ or 4 slots for each bucket. The utilisation of a batch is simply the highest number of filled slots across all 65536 buckets. In this batch, no bucket has had all 4 of its slots filled with chunks, so the batch is not yet fully utilised. The highest number of filled slots in any bucket is 2, so the stamp batch's utilisation is 2 out of 4.

@@ -113,18 +108,18 @@ The default batch type when unspecified is immutable. This can be modified through the Bee API when buying a batch.

### Implications for Swarm Users

Due to the nature of batch utilisation described above, batches are often fully utilised before reaching their theoretical maximum storage amount. However, as the batch depth increases, the chance of a postage batch becoming fully utilised early decreases. At batch depth 24, there is only a 0.1% chance that a batch will become fully utilised (and start replacing old chunks) before reaching 64.33% of its theoretical maximum.

Let's look at an example to make this clearer. Using the method of calculating the theoretical maximum storage amount [outlined above](/docs/learn/technology/contracts/postage-stamp#batch-depth), we can see that for a batch depth of 24, the theoretical maximum amount which can be stored is 68.72 GB:

$$
2^{24+12} = \text{68,719,476,736 bytes} = \text{68.72 GB}
$$

Therefore we should use 64.33% as the effective rate of usage for the stamp batch:

$$
\text{68.72 GB} \times 0.6433 = \text{44.21 GB}
$$

@@ -142,10 +137,6 @@ The provided table shows the effective volume for each batch depth from 24 to 41:

| Batch Depth | Utilisation Rate | Theoretical Max Volume | Effective Volume |
|-------------|------------------|------------------|------------------------|
| 24 | 64.33% | 68.72 GB | 44.21 GB |
| 25 | 74.78% | 137.44 GB | 102.78 GB |
| 26 | 82.17% | 274.88 GB | 225.86 GB |
