compute_ctl: Streamline and Pipeline startup SQL #9717
Conversation
Before, compute_ctl didn't have a good registry for what command would run when, depending exclusively on sync code to apply changes. When users have many databases/roles to manage, this step can take a substantial amount of time, breaking assumptions about low (re)start times in other systems. This commit reduces the time compute_ctl takes to restart when changes must be applied, by making all commands more or less blind writes, and applying these commands in an asynchronous context, only waiting for completion once we know the commands have all been sent. Additionally, this reduces time spent by batching per-database operations where previously we would create a new SQL connection for every user-database operation we planned to execute.
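The batching idea described above can be sketched roughly as follows. This is a hypothetical, simplified model using std threads in place of compute_ctl's actual async machinery; `Op`, `apply_ops`, and the per-database grouping are illustrative names, not the real API:

```rust
use std::collections::HashMap;
use std::thread;

// Hypothetical stand-in for one scheduled per-database operation.
struct Op {
    db: String,
    sql: String,
}

// Group operations by database so each database is "connected to" once,
// run all batches concurrently, and only wait once at the end.
fn apply_ops(ops: Vec<Op>) -> usize {
    let mut per_db: HashMap<String, Vec<String>> = HashMap::new();
    for op in ops {
        per_db.entry(op.db).or_default().push(op.sql);
    }
    let handles: Vec<_> = per_db
        .into_iter()
        .map(|(db, stmts)| {
            thread::spawn(move || {
                // Stand-in for: open one connection to `db`, then send
                // every statement as a blind write.
                let _ = db;
                stmts.len()
            })
        })
        .collect();
    // Single wait-for-completion point after everything has been sent.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}
```

The key property is that the wait happens once, at the end, instead of once per statement or per connection.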
Looks really nice. Any benchmarks?
5499 tests run: 5273 passed, 0 failed, 226 skipped (full report)
Flaky tests (3): Postgres 15, Postgres 14
Code coverage* (full report)
* collected from Rust tests only. The comment gets automatically updated with the latest test results.
32ea5b3 at 2024-11-19T12:09:59.228Z :recycle:
Is there a control plane aspect here? Not obvious to me if so.
Not that I'm aware of. I'm betting you got the ping due to the compute spec reading code.
I have a general comment, not directly related to this PR, which should maybe be addressed separately. Right now we apply all these statements concurrently with user statements. In other words, at the moment we execute these statements, the database already accepts user connections and so can execute user queries.
It may lead to some unexpected and confusing behaviour: for example, a user may observe an old version of the Neon extension because it was not yet upgraded. I actually reproduced such a problem myself in one of the tests.
So I wonder if we could (should?) somehow prevent users from accessing this database instance before compute_ctl completes the node update? Not sure what is the best way to do it:
- lock some catalog table
- configure different port
- somehow block connections at proxy level
...
Don't we only start allowing connections from outside the container after we've transitioned to the Running state? During startup we're the sole user of the database, right? At least, that's what's been diagnosed as causing the For
I think that's more related to
The previous iteration always connected to every database, which slowed down startup when most databases had no operations scheduled.
Addresses @tristan957's comment, and allows separate review of these queries.
Instead of Mutex, using an RwLock on shared modifiable state helps speed up read-only parallel steps (i.e. most parallel steps). This makes the per-database jobs actually work in parallel, rather than serialize on accessing the database and role tables. Additionally, this patch reduces the number of parallel workers to 1/3rd of max_connections, from max_connections - 10. With the previous setting I consistently hit issues with "ERROR: no unpinned buffers available"; now it works like a charm.
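The RwLock point can be illustrated with a minimal std-only sketch (the catalog vector and function name are illustrative, not compute_ctl's actual types):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Shared, mostly-read state guarded by an RwLock instead of a Mutex:
// many readers can hold the lock simultaneously, so read-only parallel
// steps no longer serialize on it.
fn count_in_parallel(catalog: Arc<RwLock<Vec<String>>>, readers: usize) -> Vec<usize> {
    let handles: Vec<_> = (0..readers)
        .map(|_| {
            let c = Arc::clone(&catalog);
            // Each worker takes a shared (read) lock; none of them block
            // each other, unlike Mutex::lock().
            thread::spawn(move || c.read().unwrap().len())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

Writers still take the exclusive `write()` lock, so the occasional mutation of the shared state remains safe while the common read path scales with the number of workers.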
Performance review
No-op sync of 1024 roles and 1024 databases, on my WSL box (16 vCPU, enough RAM/SSD):
Config
First I let the system apply the changes, then shut the database down. Finally, I measure start time with nothing to be done.
Future work
Many of the modifying queries generated for the system database (e.g. role and database creation, deletions) can be parallelized within their respective phases, too. Maybe we can look into that at a later date, as creating 1k databases sequentially takes a lot of time (and is thus likely to hit
@clipperhouse could you check this PR and approve it if/when you determine this doesn't produce any issues for Control Plane?
Any API contract changes here, from the perspective of control plane as caller or callee? My read is that we should mostly see faster responses, and possibly better concurrency? cc'ing @mtyazici to have a look.
AFAIS, this PR doesn't change any behavior in the cplane -> compute communication. So, I don't think it needs our review. Please let us know otherwise. cc @MMeent
The files touched involve CPlane API functionality, and thus are marked via CodeOwners as requiring team-CPlane's review before they can be merged.
## Problem

We used `set_path()` to replace the database name in the connection string. It automatically does url-safe encoding if the path is not already encoded, but it does it as per the URL standard, which assumes that tabs can be safely removed from the path without changing the meaning of the URL. See, e.g., https://url.spec.whatwg.org/#concept-basic-url-parser. It also breaks for DBs with properly %-encoded names, like with `%20`, as they are kept intact, but actually should be escaped. Yet, this is not true for Postgres, where it's completely valid to have trailing tabs in the database name.

I think this is the PR that caused this regression: #9717, as it switched from `postgres::config::Config` back to `set_path()`. This was fixed a while ago already [1], btw; I just haven't added a test to catch this regression back then :(

## Summary of changes

This commit changes the code back to use `postgres/tokio_postgres::Config` everywhere. While on it, also do some changes around, as I had to touch this code:

1. Bump some logging from `debug` to `info` in the spec apply path. We do not use `debug` in prod, and it was tricky to understand what was going on with this bug in prod.
2. Refactor the configuration concurrency calculation code so it is reusable. Yet, still keep `1` in the case of reconfiguration. The database can be actively used at this moment, so we cannot guarantee that there will be enough spare connection slots, and the underlying code won't handle connection errors properly.
3. Simplify the installed extensions code. It was spawning a blocking task inside an async function, which doesn't make much sense. Instead, just have a main sync function and call it with `spawn_blocking` in the API code -- the only place we need it to be async.
4. Add a regression Python test to cover this and related problems in the future. Also, add more extensive testing of schema dump and the DBs and roles listing API.

[1]: 4d1e48f
[2]: https://www.postgresql.org/message-id/flat/20151023003445.931.91267%40wrigleys.postgresql.org

Resolves neondatabase/cloud#20869
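To illustrate the encoding problem, here is a self-contained sketch with a hand-rolled percent-encoder (not the url crate or tokio_postgres API). A Postgres database name may legally contain tabs or a literal `%20`, so it must be fully percent-encoded when embedded in a URL path; a parser following the WHATWG rules may instead strip the tab and keep `%20` intact, silently changing which database is meant:

```rust
// Illustrative only: fully percent-encode a name for use as a URL path
// segment, encoding everything outside the RFC 3986 "unreserved" set.
fn encode_path_segment(name: &str) -> String {
    name.bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            // Note: '%' itself is escaped too, so a literal "%20" in a
            // database name survives the round trip instead of being
            // decoded into a space.
            _ => format!("%{:02X}", b),
        })
        .collect()
}
```

With this, a database literally named `my%20db` becomes `my%2520db` in the URL, and a name with a trailing tab becomes `...%09` rather than having the tab silently dropped, which is the behavior the `tokio_postgres::Config` route preserves.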
Problem
Performance of starting compute with 100s of users and 100s of databases is quite suboptimal. This is one way to reduce the pain.
Summary of changes
- Build a registry of what command should run in which phase
- Use `async` to apply the SQL changes built in the above system
- Update `compute_ctl` even further to batch all operations done in each database, so that we don't needlessly (re)connect to the same database over and over when a large amount of roles was deleted.

Note for CP reviewer: I don't expect much to have changed on your part, as it is mostly data flow changes in the tool itself, with no definition changes on the Compute Spec side.
Fixes https://github.com/neondatabase/cloud/issues/20461