tokio-epoll-uring: use it on the layer-creating code paths #6378

Merged
merged 66 commits into main from problame/integrate-tokio-epoll-uring/more-ops on Mar 5, 2024

Conversation

@problame problame (Contributor) commented Jan 17, 2024

part of #6663
See that epic for more context & related commits.

Problem

Before this PR, the layer-file-creating code paths were using VirtualFile, but under the hood those operations were still blocking system calls.

Generally this meant we'd stall the executor thread, unless the caller "knew" and used the following pattern instead:

// off-load to a blocking thread, then block that thread on the async call
spawn_blocking(|| {
    Handle::current().block_on(async {
        VirtualFile::....().await;
    })
}).await

Solution

This PR adopts tokio-epoll-uring on the layer-file-creating code paths in pageserver.

Note that on-demand downloads still use tokio::fs; these will be converted in a future PR.

Design: Avoiding Regressions With std-fs

If we make the VirtualFile write path truly async using tokio-epoll-uring, should we then remove the spawn_blocking + Handle::block_on usage upstack in the same commit?

No, because if we're still using the std-fs io engine, we'd block the executor in exactly the places where spawn_blocking previously protected us from doing so.

So, if we want the benefits of tokio-epoll-uring on the write path while preserving the ability to switch between tokio-epoll-uring and std-fs, with std-fs behaving identically to what we have today, we need to use spawn_blocking + Handle::block_on conditionally.

I.e., in the places where we use that pattern now, we'll need to make it conditional on the currently configured io engine.
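As a rough illustration of what "conditional" means here (a sketch only; `IoEngineKind`, `current_io_engine()`, and `maybe_spawn_blocking()` are made-up names, not the pageserver's actual API):

```rust
// Sketch only, assuming made-up names: `IoEngineKind` and `current_io_engine()`
// stand in for however the pageserver exposes the configured io engine.
use std::future::Future;
use tokio::runtime::Handle;
use tokio::task::spawn_blocking;

#[derive(Clone, Copy, PartialEq, Eq)]
enum IoEngineKind {
    StdFs,
    TokioEpollUring,
}

fn current_io_engine() -> IoEngineKind {
    // hypothetical lookup of the configured engine
    IoEngineKind::TokioEpollUring
}

/// Run `fut` directly when the engine is truly async (tokio-epoll-uring),
/// but keep the old spawn_blocking + block_on protection for std-fs.
async fn maybe_spawn_blocking<F, T>(fut: F) -> T
where
    F: Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    match current_io_engine() {
        IoEngineKind::TokioEpollUring => fut.await,
        IoEngineKind::StdFs => {
            let handle = Handle::current();
            spawn_blocking(move || handle.block_on(fut))
                .await
                .expect("spawn_blocking task panicked")
        }
    }
}
```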

It boils down to investigating all the places where we do spawn_blocking(... block_on(... VirtualFile::...)).

Detailed write-up of that investigation in Notion, made publicly accessible.

tl;dr: preceding PRs addressed the relevant call sites.

NB: once we are switched over to tokio-epoll-uring everywhere in production, we can deprecate std-fs; to keep macOS support, we can use tokio::fs instead. That will remove this whole headache.

Code Changes In This PR

  • VirtualFile API changes
    • VirtualFile::write_at
      • implement an ioengine operation and switch VirtualFile::write_at to it
    • VirtualFile::metadata()
      • curiously, we only use it from the layer writers' finish() methods
      • introduce a wrapper Metadata enum because std::fs::Metadata cannot be constructed by code outside Rust's std (see the sketch after this list)
    • add VirtualFile::sync_all() and, for completeness' sake, VirtualFile::sync_data()
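For context, a minimal sketch of the kind of wrapper the Metadata bullet describes; the variant and field names are assumptions, not the exact types introduced by this PR:

```rust
// Sketch only: the actual types introduced by this PR may differ. The point
// is that std::fs::Metadata has no public constructor, so results coming
// back from the io_uring statx path need their own representation.
pub struct UringStatx {
    /// file size in bytes, as reported by statx (hypothetical field)
    pub size: u64,
}

pub enum Metadata {
    /// produced by the std-fs engine via std::fs::File::metadata()
    Std(std::fs::Metadata),
    /// produced by the tokio-epoll-uring engine from a raw statx result
    Uring(UringStatx),
}

impl Metadata {
    pub fn len(&self) -> u64 {
        match self {
            Metadata::Std(m) => m.len(),
            Metadata::Uring(s) => s.size,
        }
    }
}
```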

Testing & Rollout

Before merging this PR, we ran the CI with both io engines.

Additionally, the changes will soak in staging.

We could add a feature gate, or a new io engine variant tokio-epoll-uring-write-path, to do a gradual rollout. However, that's not part of this PR.

Future Work

There's still some use of std::fs and/or tokio::fs for directory namespace operations, e.g. std::fs::rename.

We're not addressing those in this PR, as we'd need to add support for them in tokio-epoll-uring first. Note that rename itself is usually fast if the directory is in the kernel dentry cache; only the fsync after the rename is slow. Those fsyncs already use tokio-epoll-uring, so the impact should be small.
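To make that concrete, here is the pattern in question as a plain-std sketch (not the pageserver's actual helper; in the pageserver the fsync step goes through VirtualFile / tokio-epoll-uring):

```rust
// Plain-std illustration of "rename, then fsync the parent directory".
use std::fs::{rename, File};
use std::io;
use std::path::Path;

fn durable_rename(src: &Path, dst: &Path) -> io::Result<()> {
    // usually fast: a metadata-only operation if the dentries are cached
    rename(src, dst)?;
    // the slow part: fsync the parent directory so the rename itself is durable
    let parent = dst.parent().expect("dst must have a parent directory");
    File::open(parent)?.sync_all()
}
```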


github-actions bot commented Jan 18, 2024

2561 tests run: 2427 passed, 0 failed, 134 skipped (full report)


Flaky tests (1)

Postgres 15

Code coverage* (full report)

  • functions: 28.8% (6959 of 24199 functions)
  • lines: 47.2% (42603 of 90201 lines)

* collected from Rust tests only


The comment gets automatically updated with the latest test results
bdd5228 at 2024-03-05T09:19:50.873Z

@problame problame force-pushed the problame/integrate-tokio-epoll-uring/wip branch 4 times, most recently from a8c9912 to 7c92164 on January 25, 2024 13:19
Base automatically changed from problame/integrate-tokio-epoll-uring/wip to main January 26, 2024 08:25
@problame problame added the run-benchmarks and run-extra-build-* labels Feb 5, 2024
@problame problame force-pushed the problame/integrate-tokio-epoll-uring/more-ops branch from 9bd0b1c to 0548374 on February 5, 2024 14:22
problame added 5 commits March 1, 2024 15:25
…kio-epoll-uring/layer-write-path-fsync-cleanups
…sync-cleanups' into problame/integrate-tokio-epoll-uring/create-layer-fatal-err-on-fsync
…-err-on-fsync' into problame/integrate-tokio-epoll-uring/ioengine-par-fsync
…' into problame/integrate-tokio-epoll-uring/more-ops
@problame problame requested review from jcsp and VladLazar March 1, 2024 15:46
@problame problame marked this pull request as ready for review March 1, 2024 15:46
@problame problame changed the base branch from main to problame/integrate-tokio-epoll-uring/ioengine-par-fsync March 1, 2024 15:46
@koivunej koivunej (Member) left a comment

For me this is looking good.

Base automatically changed from problame/integrate-tokio-epoll-uring/ioengine-par-fsync to main March 4, 2024 13:31
problame added a commit that referenced this pull request Mar 4, 2024
…ync_all()` (#6986)

Except for the involvement of the VirtualFile fd cache, this is
equivalent to what happened before at runtime.

Future PR #6378 will implement
`VirtualFile::sync_all()` using
tokio-epoll-uring if that's configured as the io engine.
This PR is preliminary work for that.

part of #6663
@problame problame changed the title from "tokio-epoll-uring integration: cover the layer-creating code paths" to "tokio-epoll-uring: use it on the layer-creating code paths" Mar 4, 2024
@problame problame enabled auto-merge (squash) March 4, 2024 19:26
@problame problame merged commit 3da410c into main Mar 5, 2024
58 checks passed
@problame problame deleted the problame/integrate-tokio-epoll-uring/more-ops branch March 5, 2024 09:03
problame added a commit that referenced this pull request Mar 12, 2024
problame added a commit that referenced this pull request Mar 12, 2024
problame added a commit that referenced this pull request Mar 13, 2024
…ode paths (#6378)"

Unchanged

test_bulk_insert[neon-release-pg14-tokio-epoll-uring].wal_written: 345 MB
test_bulk_insert[neon-release-pg14-tokio-epoll-uring].wal_recovery: 9.194 s

This reverts commit 3da410c.
problame added a commit that referenced this pull request Mar 13, 2024
problame added a commit that referenced this pull request Mar 15, 2024
refs #7136

Problem
-------

Before this PR, we were using `tokio_epoll_uring::thread_local_system()`,
which panics on tokio_epoll_uring::System::launch() failure.

As we've learned in the past (#6373 (comment)),
some older Linux kernels account io_uring instances as locked memory.

And while we've raised the limit in prod considerably, we did hit it
once on 2024-03-11 16:30 UTC.
That was after we enabled tokio-epoll-uring fleet-wide, but before
we had shipped release-5090 (c6ed86d)
which did away with the last mass-creation of tokio-epoll-uring
instances as per

    commit 3da410c
    Author: Christian Schwarz <christian@neon.tech>
    Date:   Tue Mar 5 10:03:54 2024 +0100

        tokio-epoll-uring: use it on the layer-creating code paths (#6378)

Nonetheless, it highlighted that panicking in this situation is probably
not ideal, as it can leave the pageserver process in a semi-broken state.

Further, due to the low sampling rate of Prometheus metrics, we don't know
much about the circumstances of this particular failure.

Solution
--------

This PR implements a custom thread_local_system() that is pageserver-aware
and will do the following on failure:
- dump relevant stats to `tracing!`; hopefully they will help us understand
  the circumstances better
- if it's the locked memory failure (or any other ENOMEM): abort() the
  process
- for any other error, retry with exponential back-off, capped at 3s
- add metric counters so we can create an alert

This makes sense in the production environment, where we know that
_usually_ there's ample locked-memory allowance available, and we know
that failures are rare.
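For illustration only, a minimal sketch of that failure handling (assuming tokio and tracing; `LaunchError`, `try_launch`, and `launch_with_retries` are hypothetical names, not the actual implementation around tokio_epoll_uring::System::launch()):

```rust
// Sketch of the described failure handling, not the actual pageserver code.
use std::time::Duration;

#[derive(Debug)]
enum LaunchError {
    Enomem,
    Other(String),
}

async fn launch_with_retries<F, Fut, T>(mut try_launch: F) -> T
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, LaunchError>>,
{
    let mut backoff = Duration::from_millis(100);
    loop {
        match try_launch().await {
            Ok(system) => return system,
            Err(LaunchError::Enomem) => {
                // locked-memory exhaustion: dump what we know, then abort
                tracing::error!("tokio-epoll-uring launch failed with ENOMEM, aborting");
                std::process::abort();
            }
            Err(err) => {
                // any other error: log (and bump a metric counter in the real code),
                // then retry with exponential back-off capped at 3 seconds
                tracing::warn!(error = ?err, backoff_ms = backoff.as_millis() as u64,
                    "tokio-epoll-uring launch failed, retrying");
                tokio::time::sleep(backoff).await;
                backoff = std::cmp::min(backoff * 2, Duration::from_secs(3));
            }
        }
    }
}
```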
problame added a commit that referenced this pull request Mar 15, 2024
Labels
run-benchmarks: Indicates to the CI that benchmarks should be run for a PR marked with this label
run-extra-build-*: When placed on a PR, tells the CI to run all extra builds