# `test_pgdata_import_smoke` timeouts (#10071)
Typical runtimes are ~210s. The timeout is always during the last step: we give the endpoint a basebackup successfully, and the endpoint logs don't say much.
The test's runtime is dominated by a simple Postgres query that sums a bunch of integers. The test intentionally bloats the data out to a lot of pages (~300k), and the reads are not batched because tests run with effective_io_concurrency=1, so we end up CPU-bottlenecked on the pageserver, which makes this test rather sensitive to background CPU load.
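For illustration, a minimal sketch of the effect being described, assuming a direct psycopg2 connection to the compute endpoint; the connection string, table name, and column are hypothetical and not taken from the actual test:

```python
# Minimal sketch (hypothetical names): the same sequential-scan query run with
# different session-level effective_io_concurrency settings.
import time

import psycopg2

# Hypothetical connection string; the real test connects to the endpoint
# started by the test fixtures.
conn = psycopg2.connect("postgresql://cloud_admin@localhost:55432/postgres")
conn.autocommit = True

with conn.cursor() as cur:
    for eic in (1, 32):
        # With effective_io_concurrency=1 each getpage request is issued
        # serially, so the scan is bound by pageserver CPU and round-trips;
        # higher values let prefetch batch the reads.
        cur.execute(f"SET effective_io_concurrency = {eic}")
        started = time.monotonic()
        # Hypothetical bloated table standing in for the test's ~300k pages.
        cur.execute("SELECT sum(i) FROM bloated_table")
        total = cur.fetchone()[0]
        print(f"effective_io_concurrency={eic}: sum={total}, "
              f"{time.monotonic() - started:.1f}s")
```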
…`effective_io_concurrency=2` in tests by default (#10114)

## Problem

`test_pgdata_import_smoke` writes two gigabytes of pages and then reads them back serially. This is CPU-bottlenecked, resulting in a long runtime and sensitivity to CPU load from other tests on the same machine.

Closes: #10071

## Summary of changes

- Use `effective_io_concurrency=32` when doing sequential scans through 2 GiB of pages in `test_pgdata_import_smoke`. This is a ~10x runtime decrease in the parts of the test that do sequential scans.
- Also set `effective_io_concurrency=2` for tests, as I noticed while debugging that we were doing all getpage requests serially, which is bad for checking the stability of the batching code; see the sketch below.
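A rough sketch of how such defaults could be wired into a test fixture (the helper and variable names here are assumptions for illustration, not the actual Neon test code):

```python
# Hypothetical fixture-level defaults mirroring the two changes above:
# every test endpoint gets effective_io_concurrency=2 so getpage batching is
# exercised, while a CPU-heavy test can override it for its sequential scans.
DEFAULT_TEST_GUCS = ["effective_io_concurrency=2"]


def endpoint_config_lines(extra_gucs=None):
    """Return postgresql.conf lines to apply to a test endpoint (sketch only)."""
    return DEFAULT_TEST_GUCS + list(extra_gucs or [])


# A test like test_pgdata_import_smoke could then raise the value for its
# 2 GiB sequential scans:
print(endpoint_config_lines(["effective_io_concurrency=32"]))
```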
https://neon-github-public-dev.s3.amazonaws.com/reports/main/12253951655/index.html#testresult/3359205196681dba/retries