pageserver: issue concurrent IO on the read path #9353

Open
wants to merge 44 commits into main from vlad/read-path-concurrent-io

Conversation

VladLazar
Contributor

@VladLazar VladLazar commented Oct 10, 2024

Problem

The read path does sequential IOs.

Summary of changes

Previously, the read path would wait for all IO in one layer visit to
complete before visiting the next layer (if a subsequent visit is
required). IO within one layer visit was also sequential.

With this PR we gain the ability to issue IO concurrently within one
layer visit and to move on to the next layer without waiting for IOs
from the previous visit to complete.

This is a slightly cleaned up version of the work done at the Lisbon hackathon.

Error handling

Before this patch set, all IO was done sequentially
for a given read. If one IO failed, then the error would stop the
processing of the read path.

Now that we are doing IO concurrently when serving a read request,
it's not trivial to implement the same error handling approach.
As of this commit, one IO failure does not stop any other IO requests.
While waiting for the IOs to complete, we stop waiting on the first
failure, but we do not signal the other pending IOs to stop, and
they will just fail silently.
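
Roughly, the failure behavior has the shape of this sketch (toy types, and collect_pending_ios here is a simplified stand-in, not the actual VectoredValueReconstructState code):

```rust
use tokio::sync::oneshot;

// Toy stand-in for the real reconstruct result type.
type IoResult = Result<Vec<u8>, String>;

/// Simplified stand-in for collect_pending_ios: await results in order and
/// bail on the first failure. The remaining spawned IOs are not cancelled;
/// their results (and errors) are simply never observed.
async fn collect_pending_ios(
    receivers: Vec<oneshot::Receiver<IoResult>>,
) -> Result<Vec<Vec<u8>>, String> {
    let mut values = Vec::with_capacity(receivers.len());
    for rx in receivers {
        // A dropped sender means the IO task went away without reporting back.
        let res = rx.await.map_err(|_| "IO task dropped its sender".to_string())?;
        values.push(res?); // first failure stops the wait, but cancels nothing
    }
    Ok(values)
}

#[tokio::main]
async fn main() {
    let mut receivers = Vec::new();
    for i in 0..3u32 {
        let (tx, rx) = oneshot::channel();
        receivers.push(rx);
        // Each "IO" is its own task; nothing aborts it if a sibling fails.
        tokio::spawn(async move {
            let outcome: IoResult =
                if i == 1 { Err(format!("io {i} failed")) } else { Ok(vec![i as u8]) };
            let _ = tx.send(outcome);
        });
    }
    // Prints the first error; the result of the third IO is never observed.
    println!("{:?}", collect_pending_ios(receivers).await);
}
```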

Long term, we need a better approach for this. Two broad ideas:

  1. Introduce some synchronization between pending IO tasks such
    that new IOs are not issued after the first failure
  2. Cancel any pending IOs when the first error is discovered

Feature Gating

One can configure this via the NEON_PAGESERVER_VALUE_RECONSTRUCT_IO_CONCURRENCY
env var. A config option is possible as well, but it's more work and this is
enough for experimentation.
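
For illustration, a minimal sketch of how such an env-var gate can be read (the default of 1, i.e. sequential, and the exact parsing are assumptions; the actual plumbing in this PR may differ):

```rust
use std::env;

/// Number of value-reconstruct IOs to keep in flight per read.
/// Assumed behavior: fall back to 1 (sequential) when unset or unparsable.
fn value_reconstruct_io_concurrency() -> usize {
    env::var("NEON_PAGESERVER_VALUE_RECONSTRUCT_IO_CONCURRENCY")
        .ok()
        .and_then(|v| v.parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(1)
}

fn main() {
    println!("io concurrency: {}", value_reconstruct_io_concurrency());
}
```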

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • Do we need to implement analytics? If so, did you add the relevant metrics to the dashboard?
  • If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.

Checklist before merging

  • Do not forget to reformat the commit message so it does not include the above checklist


github-actions bot commented Oct 10, 2024

7095 tests run: 6777 passed, 0 failed, 318 skipped (full report)


Flaky tests (6)

Postgres 17

Postgres 16

Postgres 15

  • test_pgdata_import_smoke[None-1024-RelBlockSize.MULTIPLE_RELATION_SEGMENTS]: release-arm64
  • test_pgdata_import_smoke[8-1024-RelBlockSize.MULTIPLE_RELATION_SEGMENTS]: release-arm64

Postgres 14

  • test_pgdata_import_smoke[8-1024-RelBlockSize.MULTIPLE_RELATION_SEGMENTS]: release-arm64
  • test_pgdata_import_smoke[None-1024-RelBlockSize.MULTIPLE_RELATION_SEGMENTS]: release-arm64

Test coverage report is not available

The comment gets automatically updated with the latest test results
31fec1f at 2024-12-12T20:23:56.140Z :recycle:

@VladLazar VladLazar marked this pull request as ready for review October 14, 2024 14:01
@VladLazar VladLazar requested a review from a team as a code owner October 14, 2024 14:01
@VladLazar VladLazar requested review from arssher and erikgrinaker and removed request for arssher October 14, 2024 14:01
@erikgrinaker
Contributor

As of this commit, one IO failure does not stop any other IO requests. While waiting for the IOs to complete, we stop waiting on the first failure, but we do not signal the other pending IOs to stop, and they will just fail silently.

Is this true? It really depends on how the IO futures are implemented, but in general, dropping a future should cancel the in-flight operation and stop polling it. Assuming they're implemented that way, it should be sufficient to ensure that the caller receives the error as soon as it happens and then drops the in-flight futures by returning the error. I don't think we need any synchronization beyond that, or am I missing something?

Contributor

@erikgrinaker erikgrinaker left a comment

Still reviewing, just flushing some comments for now. All nits, take them or leave them.

Outdated, resolved review comments on: pageserver/src/tenant/timeline.rs, pageserver/src/tenant/vectored_blob_io.rs, pageserver/src/tenant/storage_layer.rs (×2)
Comment on lines +1009 to +1016
let (tx, rx) = sync::oneshot::channel();
senders.insert((blob_meta.key, blob_meta.lsn), tx);
reconstruct_state.update_key(
&blob_meta.key,
blob_meta.lsn,
blob_meta.will_init,
rx,
);
Contributor

@erikgrinaker erikgrinaker Oct 16, 2024

I find the overall shape of this interaction a bit curious. Is there a compelling reason why we chose to use channels here instead of making update_key an async function and passing it an IO future for the value? That seems like a more natural approach, and would allow us to use standard async utilities to control the scheduling -- but maybe I'm missing some detail.

Contributor Author

Is there a compelling reason why we chose to use channels here instead of making update_key an async function and passing it an IO future for the value?

update_key shouldn't wait on the completion of the IO, so I don't really understand why we would make it async.
The receiver is just pushed into VectoredValueReconstructState::on_disk_values and that vector is drained at the end of the read in VectoredValueReconstructState::collect_pending_ios.

I guess we could change VectoredValueReconstructState::on_disk_values to stash IO futures instead of oneshot channels, but I don't really see what that gets us.

Contributor

update_key shouldn't wait on the completion of the IO, so I don't really understand why we would make it async.

I think I just intuitively considered it easier to understand updating the state once the read completed, which we could do by nesting futures, but I don't think it has any practical consequence. We'd probably run into some ownership issues as well.

I guess we could change VectoredValueReconstructState::on_disk_values to stash IO futures instead of oneshot channels, but I don't really see what that gets us.

I think there are a few benefits to consider:

  • Spawning a task has a cost, and spawning many tasks puts pressure on the Tokio scheduler. I don't know if this matters considering we also do IO here, but it might at scale. Futures can be awaited on the current task without spawning new ones.

  • Serial/parallel operation becomes trivial: await the futures sequentially (serial), or use e.g. FuturesOrdered or FuturesUnordered to await them all concurrently on the same task (parallel) -- the latter also optimizes away polling costs. (See the sketch after this list.)

  • Futures are automatically cancelled when dropped, while spawned tasks need explicit cancellation (you alluded to this being a problem in the PR description).

  • We avoid the additional channel synchronization and overhead. This probably doesn't matter considering we're doing IO.
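
To make the futures-based alternative concrete, here's a minimal sketch (hypothetical read_value standing in for a single layer read; not our API) of awaiting the same IOs sequentially vs. concurrently on one task:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

// Hypothetical stand-in for one value read; no task is spawned for it.
async fn read_value(key: u32) -> Result<Vec<u8>, String> {
    Ok(vec![key as u8])
}

// Serial: one IO at a time.
async fn serial(keys: &[u32]) -> Result<Vec<Vec<u8>>, String> {
    let mut out = Vec::new();
    for &k in keys {
        out.push(read_value(k).await?);
    }
    Ok(out)
}

// Concurrent: all IO futures are polled on the current task.
// Dropping the stream (e.g. via an early `?` return) cancels whatever is still in flight.
async fn concurrent(keys: &[u32]) -> Result<Vec<Vec<u8>>, String> {
    let mut ios: FuturesUnordered<_> = keys.iter().map(|&k| read_value(k)).collect();
    let mut out = Vec::new();
    // Results arrive in completion order, not submission order.
    while let Some(res) = ios.next().await {
        out.push(res?);
    }
    Ok(out)
}

#[tokio::main]
async fn main() {
    println!("{:?}", serial(&[1, 2, 3]).await);
    println!("{:?}", concurrent(&[1, 2, 3]).await);
}
```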

Contributor Author

I think I just intuitively considered it easier to understand updating the state once the read completed, which we could do by nesting futures, but I don't think it has any practical consequence. We'd probably run into some ownership issues as well.

Yeah, I agree. The issue with that is that we don't get concurrency. The gist here is that we move on to the next layer after issuing IO without waiting on completion.

I think there are a few benefits to consider

That's fair. I'll give it a go when I get the time.

Contributor

Yeah, I agree. The issue with that is that we don't get concurrency. The gist here is that we move on to the next layer after issuing IO without waiting on completion.

I meant that these would just wrap the IO future, but not await it straight away -- we'd await the futures concurrently. The state would update as each read completed. But as I said, this might cause ownership issues (we might get away with it if we're not spawning tasks, but could get ugly either way).

Contributor Author

I gave this a go. It's doable, but quite tricky. The difficulty comes from the fact that we need to fan out futures from the IO future itself (which reads data for multiple keys). This would currently be quite inefficient, because we would have to iterate over all of VectoredBlobsBuf::blobs for each blob.

I think we should defer this to when we pick this back up and benchmark it. The task and future approaches should be benchmarked separately. With the task approach we actually perform IO while the traversal is happening, but with the futures approach all IO would be done at the end.

Contributor

Yeah, benchmarks seem reasonable. I'm a bit worried that putting this amount of pressure on the Tokio scheduler could cause latency issues, probably more than the gains we get from executing IOs earlier, but we can find out with appropriate benchmarks.

Outdated, resolved review comments on: pageserver/src/tenant/storage_layer.rs (×2)
Previously, the read path would wait for all IO in one layer visit to
complete before visiting the next layer (if a subsequent visit is
required). IO within one layer visit was also sequential.

With this patch we gain the ability to issue IO concurrently within one
layer visit **and** to move on to the next layer without waiting for IOs
from the previous visit to complete.

This is a slightly cleaned up version of the work done at the Lisbon
hackathon.
It's obvious the method is unused, but let's break down error handling
of the read path. Before this patch set, all IO was done sequentially
for a given read. If one IO failed, then the error would stop the
processing of the read path.

Now that we are doing IO concurrently when serving a read request,
it's not trivial to implement the same error handling approach.
As of this commit, one IO failure does not stop any other IO requests.
While waiting for the IOs to complete, we stop waiting on the first
failure, but we do not signal the other pending IOs to stop, and
they will just fail silently.

Long term, we need a better approach for this. Two broad ideas:
1. Introduce some synchronization between pending IO tasks such
that new IOs are not issued after the first failure
2. Cancel any pending IOs when the first error is discovered
Previously, each pending IO sent back a raw buffer containing exactly what it
read from the layer file for the key. This made the awaiter code
confusing because on-disk images in layer files don't keep the enum wrapper,
but the ones in delta layers do.

This commit introduces a type to make this a bit easier and cleans up
the IO awaiting code a bit. We also avoid a rather silly
serialize/deserialize dance.
We now only store indices in the page cache.
This commit removes any caching support from the read path.
`BlobMeta::will_init` is not actually used on these code paths,
but let's be kind to our future selves and make sure it's correct.
One can configure this via the NEON_PAGESERVER_VALUE_RECONSTRUCT_IO_CONCURRENCY
env var. A config option is possible as well, but it's more work and this is
enough for experimentation.
@VladLazar VladLazar force-pushed the vlad/read-path-concurrent-io branch from 73aa1c6 to dba6968 Compare November 4, 2024 13:43
@VladLazar
Contributor Author

As of this commit, one IO failure does not stop any other IO requests. While waiting for the IOs to complete, we stop waiting on the first failure, but we do not signal the other pending IOs to stop, and they will just fail silently.

Is this true? It really depends on how the IO futures are implemented, but in general, dropping a future should cancel the in-flight operation and stop polling it. Assuming they're implemented that way, it should be sufficient to ensure that the caller receives the error as soon as it happens and then drops the in-flight futures by returning the error. I don't think we need any synchronization beyond that, or am I missing something?

I missed this question.

All "IO futures" are collected in VectoredValueReconstructState::on_disk_values. With this PR, the read path does not wait for the outcome of one IO before issuing the next one. Hence, at the end of the layer traversal, we will have created "IO futures" for all required values. Each "IO future" is independent. If the first one fails, the others are still present.

Let's also consider what happens when the "IO futures" after the failed one have not yet completed. We bail out in collect_pending_ios, but the tasks for all the incomplete IOs keep running, since we don't call abort on the JoinHandle or have cancellation wired in.
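
For illustration, here's a generic tokio example (not our code) of the difference: a spawned task keeps running until it is explicitly aborted, whereas a plain future is cancelled simply by being dropped:

```rust
use std::time::Duration;
use tokio::time::sleep;

#[tokio::main]
async fn main() {
    // Spawned task: dropping the JoinHandle does NOT stop the work;
    // it keeps running in the background unless we call handle.abort().
    let handle = tokio::spawn(async {
        sleep(Duration::from_secs(60)).await;
        println!("spawned IO finished"); // never printed once aborted
    });
    handle.abort(); // explicit cancellation
    assert!(handle.await.unwrap_err().is_cancelled());

    // Plain future: dropping it before completion cancels it outright.
    let fut = async {
        sleep(Duration::from_secs(60)).await;
        println!("future IO finished");
    };
    drop(fut); // never polled, nothing keeps running
    println!("done");
}
```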

Contributor

@erikgrinaker erikgrinaker left a comment

I've done another pass to check that there aren't any issues with ordering or races, and I can't see any -- even though we dispatch IOs concurrently, we always access the results in a predetermined order.

I think this should be good to go, once we resolve the tasks vs. futures discussion above.

@problame problame self-assigned this Nov 12, 2024
---------------------------------------- Benchmark results ----------------------------------------
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].counters.time: 0.7328
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 0.8850
test_throughput[release-pg16-50-pipelining_config0-5-serial-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].counters.time: 0.7545
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 0.9667
test_throughput[release-pg16-50-pipelining_config1-5-parallel-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.1824
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.1111
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 297.8889
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.1111
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2196
test_throughput[release-pg16-50-pipelining_config2-5-serial-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4883
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2743
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.6667
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.4444
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.6667
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3350
test_throughput[release-pg16-50-pipelining_config3-5-parallel-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4501
test_latency[release-pg16-pipelining_config0-serial-{'mode': 'serial'}].latency_mean: 0.145 ms
test_latency[release-pg16-pipelining_config0-serial-{'mode': 'serial'}].latency_percentiles.p95: 0.178 ms
test_latency[release-pg16-pipelining_config0-serial-{'mode': 'serial'}].latency_percentiles.p99: 0.199 ms
test_latency[release-pg16-pipelining_config0-serial-{'mode': 'serial'}].latency_percentiles.p99.9: 0.265 ms
test_latency[release-pg16-pipelining_config0-serial-{'mode': 'serial'}].latency_percentiles.p99.99: 0.366 ms
test_latency[release-pg16-pipelining_config1-parallel-{'mode': 'serial'}].latency_mean: 0.168 ms
test_latency[release-pg16-pipelining_config1-parallel-{'mode': 'serial'}].latency_percentiles.p95: 0.201 ms
test_latency[release-pg16-pipelining_config1-parallel-{'mode': 'serial'}].latency_percentiles.p99: 0.224 ms
test_latency[release-pg16-pipelining_config1-parallel-{'mode': 'serial'}].latency_percentiles.p99.9: 0.317 ms
test_latency[release-pg16-pipelining_config1-parallel-{'mode': 'serial'}].latency_percentiles.p99.99: 0.416 ms
test_latency[release-pg16-pipelining_config2-serial-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.149 ms
test_latency[release-pg16-pipelining_config2-serial-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.184 ms
test_latency[release-pg16-pipelining_config2-serial-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.205 ms
test_latency[release-pg16-pipelining_config2-serial-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.289 ms
test_latency[release-pg16-pipelining_config2-serial-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.359 ms
test_latency[release-pg16-pipelining_config3-parallel-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.180 ms
test_latency[release-pg16-pipelining_config3-parallel-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.219 ms
test_latency[release-pg16-pipelining_config3-parallel-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.244 ms
test_latency[release-pg16-pipelining_config3-parallel-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.341 ms
test_latency[release-pg16-pipelining_config3-parallel-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.522 ms
test_latency[release-pg16-pipelining_config4-serial-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.154 ms
test_latency[release-pg16-pipelining_config4-serial-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.189 ms
test_latency[release-pg16-pipelining_config4-serial-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.211 ms
test_latency[release-pg16-pipelining_config4-serial-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.268 ms
test_latency[release-pg16-pipelining_config4-serial-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.307 ms
test_latency[release-pg16-pipelining_config5-parallel-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.167 ms
test_latency[release-pg16-pipelining_config5-parallel-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.211 ms
test_latency[release-pg16-pipelining_config5-parallel-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.238 ms
test_latency[release-pg16-pipelining_config5-parallel-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.338 ms
test_latency[release-pg16-pipelining_config5-parallel-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.407 ms

======================================== 10 passed in 120.65s (0:02:00) ========================================
---------------------------------------- Benchmark results ----------------------------------------
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.time: 1.0955
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 1.0250
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.time: 1.1962
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 1.2700
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2611
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.5000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.0556
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.5000
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2850
test_throughput[release-pg16-50-pipelining_config2-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4775
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.3033
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.6875
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.0625
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.6875
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3075
test_throughput[release-pg16-50-pipelining_config3-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4777

======================================== 4 passed, 6 deselected in 38.06s ========================================
christian@neon-hetzner-dev-christian:[~/src/neon]: DEFAULT_PG_VERSION=16 BUILD_TYPE=release poetry run pytest --alluredir ~/tmp/alluredir --clean-alluredir test_runner/performance/pageserver/test_page_service_batching.py -k 'test_throughput' --maxfail=1

---------------------------------------- Benchmark results ----------------------------------------
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.time: 1.0999
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 1.0450
test_throughput[release-pg16-50-pipelining_config0-5-serial-direct-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.time: 1.2530
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 1.3367
test_throughput[release-pg16-50-pipelining_config1-5-parallel-direct-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].counters.time: 1.0713
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_sum: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_batch_size_histo_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].counters.pageserver_cpu_seconds_total: 1.0550
test_throughput[release-pg16-50-pipelining_config2-5-futures-unordered-direct-100-128-batchable {'mode': 'serial'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2825
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.5882
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.1176
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.5882
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3012
test_throughput[release-pg16-50-pipelining_config3-5-serial-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4734
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.3162
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.8000
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.5333
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.8000
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3227
test_throughput[release-pg16-50-pipelining_config4-5-parallel-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4442
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50.0000 MiB
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100.0000
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128.0000
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].config: 0.0000
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2842
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_sum: 6,401.7647
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_batch_size_histo_count: 298.1176
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.7647
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3135
test_throughput[release-pg16-50-pipelining_config5-5-futures-unordered-direct-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.4740

======================================== 6 passed, 9 deselected in 55.08s ========================================
…roughput_with_many_clients_one_tenant to measure latency improvement in unbatchable-pagestream but parallelizable workload (multiple layers visited)
christian@neon-hetzner-dev-christian:[~/src/neon]: NEON_ENV_BUILDER_USE_OVERLAYFS_FOR_SNAPSHOTS=true DEFAULT_PG_VERSION=16 BUILD_TYPE=release poetry run pytest --alluredir ~/tmp/alluredir --clean-alluredir test_runner/performance -k 'test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant' --maxfail=1

---------------------------------------- Benchmark results ----------------------------------------
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 46336
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.429 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.607 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.705 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 1.138 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 2.059 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 48772
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.408 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.592 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.701 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 1.139 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 2.231 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 47102
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.422 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.605 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.705 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 1.124 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 2.029 ms

======================================== 3 passed, 236 deselected in 102.19s (0:01:42) ========================================
---------------------------------------- Benchmark results ----------------------------------------
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 47653
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.417 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.588 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.670 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 0.991 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 1.896 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.time: 20.1797
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_cpu_seconds_total: 15.1300
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1: 2209
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.4: 9683
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.8: 16795
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.16: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.32: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.64: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.128: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.256: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.512: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1024: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-serial-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.9223372036854775807: 50022
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 46773
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.425 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.632 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.837 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 1.234 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 2.347 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.time: 20.2244
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_cpu_seconds_total: 15.9300
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1: 2072
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.4: 9362
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.8: 16558
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.16: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.32: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.64: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.128: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.256: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.512: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1024: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-parallel-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.9223372036854775807: 49119
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_tenants: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pgbench_scale: 136
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.duration: 20 s
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.n_clients: 1
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.config: 0
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.page_cache_size: 134217728 byte
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.pageserver_config_override.max_file_descriptors: 500000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.request_count: 47861
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_mean: 0.416 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p95: 0.609 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99: 0.713 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.9: 1.135 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.latency_percentiles.p99.99: 1.940 ms
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.time: 20.1969
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_cpu_seconds_total: 15.7000
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1: 2282
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.4: 9597
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.8: 16926
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.16: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.32: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.64: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.128: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.256: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.512: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.1024: 50189
test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant[release-pg16-direct-futures-unordered-1-1-136-20].pageserver_max_throughput_getpage_at_latest_lsn.counters.pageserver_layers_visited_per_vectored_read_global_buckets.9223372036854775807: 50189

=========================== 3 passed, 236 deselected in 101.20s (0:01:41) ===========================
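
For context on the parameterizations above: `serial`, `parallel`, and `futures-unordered` select how the value-reconstruct IOs are issued within a layer visit. At a single client the three strategies land essentially on top of each other (mean latency 0.417 / 0.425 / 0.416 ms for serial / parallel / futures-unordered in the run above). The snippet below is a minimal sketch of the difference between the serial and futures-unordered strategies; `read_blob` and the function names are illustrative placeholders, not the pageserver's actual read-path API.

```rust
// Sketch only: contrasts "serial" vs "futures-unordered" IO issuance.
// `read_blob` is a stand-in for a single on-demand layer read.
use futures::stream::{FuturesUnordered, StreamExt};

async fn read_blob(offset: u64) -> std::io::Result<Vec<u8>> {
    // Placeholder for an actual disk read.
    Ok(offset.to_le_bytes().to_vec())
}

// "serial": await each IO before issuing the next one.
async fn read_serial(offsets: &[u64]) -> std::io::Result<Vec<Vec<u8>>> {
    let mut out = Vec::with_capacity(offsets.len());
    for &off in offsets {
        out.push(read_blob(off).await?);
    }
    Ok(out)
}

// "futures-unordered": issue all IOs up front and drain completions
// in whatever order they finish.
async fn read_concurrent(offsets: &[u64]) -> std::io::Result<Vec<Vec<u8>>> {
    let mut pending: FuturesUnordered<_> =
        offsets.iter().map(|&off| read_blob(off)).collect();
    let mut out = Vec::with_capacity(offsets.len());
    while let Some(res) = pending.next().await {
        // The first error stops draining; futures that were already issued
        // are simply dropped rather than cancelled explicitly.
        out.push(res?);
    }
    Ok(out)
}
```

Any benefit from concurrent issuance is expected to show up mainly on reads that have to visit many layers, which is what the layer-visit histograms above are there to capture.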
@problame problame force-pushed the vlad/read-path-concurrent-io branch from bdb8a09 to 1f7f947 on December 12, 2024 13:51
(non-package-mode-py3.10) christian@neon-hetzner-dev-christian:[~/src/neon/test_runner]: poetry run python3 deep_layers_with_delta.py
@problame problame force-pushed the vlad/read-path-concurrent-io branch from c326d36 to 31fec1f on December 12, 2024 19:25