changelog: rewrite entry about new options for the datastore
hsanjuan committed Dec 10, 2024
1 parent 512f468 commit b4b028c
Showing 2 changed files with 41 additions and 8 deletions.
20 changes: 13 additions & 7 deletions docs/changelogs/v0.33.md
If you depended on removed ones, please file an issue to add them to the upstream.

Onboarding files and directories with `ipfs add --to-files` now requires non-empty names. Due to this, the `--to-files` and `--wrap` options are now mutually exclusive ([#10612](https://github.com/ipfs/kubo/issues/10612)).

#### New datastore options for faster writes: `WriteThrough`, `BlockKeyCacheSize`

Now that Kubo supports [`pebble`](../datastores.md#pebbleds) as a datastore backend, it becomes very useful to expose some additional configuration options for how the blockservice/blockstore/datastore combo behaves.

Usually, LSM-tree-based datastores like Pebble or Badger have very fast write performance (blocks are streamed to disk) while incurring read-amplification penalties (blocks need to be looked up in the index to know where they are on disk). Prior to this version, `BlockService` and `Blockstore` implementations performed a `Has(cid)` call for every block that was going to be written, skipping the write altogether if the block was already present in the datastore.

The performance impact of this `Has()` call can vary. The `Datastore` implementation might include block-caching and things like bloom filters to speed up lookups and mitigate read penalties. Our `Blockstore` implementation also includes a bloom filter (controlled by `BloomFilterSize`, and disabled by default), and a two-queue cache for keys and block sizes. If we assume that most blocks added to Kubo are new blocks not already present in the datastore, or that the datastore itself includes mechanisms to optimize writes and avoid writing the same data twice, the calls to `Has()` at both the BlockService and Blockstore layers seem superfluous, and we have seen them harm performance when importing large amounts of data.

For these reasons, from now on, the default is to use a "write-through" implementation of the BlockService/Blockstore. We have added a new option, `Datastore.WriteThrough`, which defaults to `true`. The previous behaviour can be obtained by switching it to `false`.

We have additionally made the size of the two-queue blockstore cache configurable with another option: `Datastore.BlockKeyCacheSize`, which defaults to `65536` (64KiB). This option does not appear in the configuration by default, but it can be set manually, and setting it to `0` disables this caching layer.
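For example, the previous check-before-write behaviour can be restored, with the key cache kept at its default size, using a `Datastore` excerpt like the following (a minimal sketch; all other `Datastore` fields are omitted):

```json
{
  "Datastore": {
    "WriteThrough": false,
    "BlockKeyCacheSize": 65536
  }
}
```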

As a reminder, all of these options are explained in detail in the [configuration documentation](../config.md).

We recommend that users trying Pebble as a datastore backend disable both the blockstore bloom filter and the key-caching layer, and enable write-through, as a way to evaluate the raw performance of the underlying datastore, which includes its own bloom filter and caching layers (the default cache size is `8MiB` and can be configured in the [options](../datastores.md#pebbleds)).
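A sketch of that evaluation setup, assuming Pebble has already been configured as the backend via `Datastore.Spec` as described in the [datastores documentation](../datastores.md#pebbleds); note that `WriteThrough: true` and `BloomFilterSize: 0` are already the defaults and are only shown here for clarity:

```json
{
  "Datastore": {
    "BloomFilterSize": 0,
    "BlockKeyCacheSize": 0,
    "WriteThrough": true
  }
}
```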


#### 📦️ Dependency updates

29 changes: 28 additions & 1 deletion docs/config.md
config file at runtime.
- [`Datastore.GCPeriod`](#datastoregcperiod)
- [`Datastore.HashOnRead`](#datastorehashonread)
- [`Datastore.BloomFilterSize`](#datastorebloomfiltersize)
- [`Datastore.WriteThrough`](#datastorewritethrough)
- [`Datastore.BlockKeyCacheSize`](#datastoreblockkeycachesize)
- [`Datastore.Spec`](#datastorespec)
- [`Discovery`](#discovery)
- [`Discovery.MDNS`](#discoverymdns)
we'd want to use 1199120 bytes. As of writing, [7 hash
functions](https://github.com/ipfs/go-ipfs-blockstore/blob/547442836ade055cc114b562a3cc193d4e57c884/caching.go#L22)
are used, so the constant `k` is 7 in the formula.

Enabling the BloomFilter can provide performance improvements, especially when
responding to many requests for nonexistent blocks. However, it requires a full
sweep of all the datastore keys on daemon start. On very large datastores this
can be a very taxing operation, particularly if the datastore does not support
querying existing keys without reading their values (the blocks) at the same time.
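
For example, using the `1199120` bytes from the calculation above, the filter could be enabled with an excerpt like this (a minimal sketch; all other `Datastore` fields are omitted):

```json
{
  "Datastore": {
    "BloomFilterSize": 1199120
  }
}
```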

Default: `0` (disabled)

Type: `integer` (non-negative, bytes)

### `Datastore.WriteThrough`

This option controls whether a block that already exists in the datastore
should be written to it again. When set to `false`, a `Has()` call is performed
against the datastore before every block is written; if the block is already
stored, the write is skipped. This check happens at both the BlockService and
Blockstore layers, and this setting affects both.
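
For example, the previous check-before-write behaviour can be restored with the following excerpt (other `Datastore` fields omitted):

```json
{
  "Datastore": {
    "WriteThrough": false
  }
}
```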

When set to `true`, no checks are performed and blocks are written directly to
the datastore, which, depending on the implementation, may perform its own checks.

This option can affect performance, and the chosen strategy should be considered
in conjunction with [`BlockKeyCacheSize`](#datastoreblockkeycachesize) and
[`BloomFilterSize`](#datastorebloomfiltersize).

Default: `true`

Type: `bool`

### `Datastore.BlockKeyCacheSize`

A number representing the maximum size in bytes of the blockstore's two-queue
cache, which caches block CIDs and their block sizes. Use `0` to disable.

This cache, once primed, can greatly speed up operations like `ipfs repo stat`
as there is no need to read full blocks to know their sizes. The size should be
adjusted depending on the number of CIDs on disk (`NumObjects` in `ipfs repo stat`).
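
For example, this caching layer can be disabled entirely with the following excerpt (other `Datastore` fields omitted):

```json
{
  "Datastore": {
    "BlockKeyCacheSize": 0
  }
}
```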

Default: `65536` (64KiB)

