Add regression tests for a cloud-based Neon instance #8681
Conversation
5029 tests run: 4865 passed, 0 failed, 164 skipped (full report)

Flaky tests (3)
- Postgres 16
- Postgres 14

Code coverage* (full report)

* collected from Rust tests only

The comment gets automatically updated with the latest test results.

0bbe5c4 at 2024-09-24T06:52:44.248Z :recycle:
Change from the step to the env variable

Co-authored-by: Alexander Bayandin <[email protected]>
This is a prerequisite for #8681
Walkthrough

The changes introduce a new GitHub Actions workflow for running daily regression tests on a PostgreSQL database, along with a set of regression tests.
Actionable comments posted: 5
Outside diff range and nitpick comments (5)
.github/workflows/cloud-regress.yml (3)
Line 34: Add a comment explaining the use of the pinned image

As suggested in a previous review, it would be helpful to add a comment explaining why the pinned build-tools image is used. This enhances the maintainability of the workflow. Consider adding a comment like this:

```yaml
# We use the pinned build-tools image to ensure consistency across runs
# and to avoid storage over-consumption. For more details, see:
# https://github.com/neondatabase/neon/blob/main/.github/workflows/pin-build-tools-image.yml
image: neondatabase/build-tools:pinned
```
Lines 37-45: LGTM: Checkout and patch steps, but add a comment for the patch

The checkout and patch steps are well-structured:

- Checking out submodules ensures all necessary code is available.
- The patching step applies custom modifications to the PostgreSQL source.

Consider adding a brief comment explaining the purpose of the patch:

```yaml
- name: Patch the test
  run: |
    # Apply custom modifications for cloud environment compatibility
    cd "vendor/postgres-v${DEFAULT_PG_VERSION}"
    patch -p1 < "../../patches/cloud_regress_pg${DEFAULT_PG_VERSION}.patch"
```
Line 84: Consider using a dedicated project for regression tests

The current setup uses a shared project for running regression tests, as indicated by the BENCHMARK_CONNSTR secret. This approach might lead to conflicts or inconsistent results if the shared project is modified by other processes. Consider creating a dedicated project for these regression tests to ensure isolation and consistency. This may involve:

- Creating a new secret for a dedicated test project connection string.
- Implementing logic to create and tear down the test project for each run.
- Updating the workflow to use the new dedicated project.

This change would enhance the reliability and reproducibility of your regression tests.
test_runner/cloud_regress/test_cloud_regress.py (2)
Line 86: Simplify port retrieval using dict.get()

You can simplify the retrieval of the port value by using the get() method with a default value, making the code more concise and readable. Apply this change:

```diff
 "PGPORT": str(
-    remote_pg.default_options["port"] if "port" in remote_pg.default_options else 5432
+    remote_pg.default_options.get("port", 5432)
 ),
```
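As a standalone illustration of the suggestion above, get() with a default collapses the membership test into one call. The dict below is a stand-in; remote_pg.default_options is a Neon test fixture attribute not reproduced in this excerpt:

```python
# Stand-in for remote_pg.default_options; "port" is deliberately absent.
default_options = {"host": "example.host", "user": "cloud_admin"}

# Verbose form from the original code:
port_ternary = str(default_options["port"] if "port" in default_options else 5432)

# Equivalent, more concise form suggested in the review:
port_get = str(default_options.get("port", 5432))

print(port_ternary, port_get)  # 5432 5432
```

Both expressions agree whether or not the key is present, so the replacement is behavior-preserving.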
Lines 75-77: Enhance function docstring for clarity

The docstring for test_cloud_regress is minimal. Provide a more detailed description to clarify the purpose and behavior of the test function. Update the docstring:

```diff
 """
-Run the regression tests
+Run the PostgreSQL regression tests against a cloud-based Neon instance.
+
+This test sets up the environment and executes the regression suite
+using the pg_regress tool to ensure compatibility with the cloud instance.
 """
```
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (3)
- .github/workflows/cloud-regress.yml (1 hunks)
- test_runner/cloud_regress/test_cloud_regress.py (1 hunks)
- test_runner/fixtures/utils.py (1 hunks)
Additional comments not posted (8)
.github/workflows/cloud-regress.yml (3)
Lines 1-11: LGTM: Well-configured workflow triggers

The workflow trigger configuration is well-structured:

- Daily scheduled run at an off-peak hour (1:45 AM UTC) is suitable for regression testing.
- Manual trigger option (workflow_dispatch) allows for on-demand testing.
- Clear comments explaining the cron syntax enhance maintainability.
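As a hedged sketch of what such a trigger block looks like (the exact cron line and comments in cloud-regress.yml may differ):

```yaml
on:
  schedule:
    # Cron syntax: minute hour day-of-month month day-of-week (UTC).
    # Runs daily at 1:45 AM UTC, an off-peak hour.
    - cron: '45 1 * * *'
  # Allows on-demand runs from the Actions tab.
  workflow_dispatch:
```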
Lines 13-20: LGTM: Robust default and concurrency settings

The default run configuration and concurrency settings are well-defined:

- Bash shell with -euxo pipefail ensures strict error handling, which is crucial for identifying issues quickly.
- The concurrency configuration prevents multiple workflow runs from interfering with each other, maintaining test integrity.
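A minimal sketch of such settings, assuming a concurrency group keyed on the workflow name (the actual keys in cloud-regress.yml are not shown in this excerpt):

```yaml
defaults:
  run:
    # -e: exit on error, -u: error on unset variables,
    # -x: trace commands, -o pipefail: fail on any pipeline stage.
    shell: bash -euxo pipefail {0}

concurrency:
  # One run at a time per workflow; assumed group key.
  group: ${{ github.workflow }}
  cancel-in-progress: false
```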
Lines 69-85: LGTM: Well-structured test execution setup

The test execution setup is well-organized:

- Custom actions for artifact download and test execution promote modularity.
- The use of environment variables and secrets ensures proper configuration.
test_runner/fixtures/utils.py (2)
Line 239: LGTM: Enhanced regex pattern for regression test outputs

The change to ATTACHMENT_NAME_REGEX now includes "regression.out" as a valid file name pattern, in addition to the existing "regression.diffs". This modification allows capturing an additional type of regression test output, which aligns with the PR objective of enabling regression tests for a cloud-based Neon instance.

The rest of the pattern remains unchanged, maintaining compatibility with existing file types. This localized change enhances the flexibility of the attachment naming convention without disrupting existing functionality.
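The real ATTACHMENT_NAME_REGEX in test_runner/fixtures/utils.py is not reproduced in this review; the alternation it gained can be sketched with a minimal, hypothetical pattern:

```python
import re

# Hypothetical reduced pattern: the actual ATTACHMENT_NAME_REGEX
# matches more file types than the two shown here.
attachment_name_regex = re.compile(r".*(?:regression\.diffs|regression\.out)$")

for name in ("regress/regression.diffs", "regress/regression.out", "regress/regression.log"):
    print(name, bool(attachment_name_regex.match(name)))
```

With this pattern, both regression.diffs and regression.out names match, while other output files such as regression.log do not.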
Line 239: Verify generation and inclusion of "regression.out" files

The change to ATTACHMENT_NAME_REGEX allows "regression.out" files to be included in the Allure report. To ensure this change is fully effective:

- Verify that the systems running the regression tests are now generating "regression.out" files when appropriate.
- Confirm that these "regression.out" files are being correctly attached to the Allure report by the allure_attach_from_dir function.

This verification will ensure that the new file type is properly utilized in the testing process.
test_runner/cloud_regress/test_cloud_regress.py (3)
Lines 41-45: Use safe SQL query construction for subscription operations [security]

Interpolating sub[0] directly into SQL statements using f-strings can lead to SQL injection vulnerabilities if sub[0] contains malicious content. Use the psycopg2.sql module to safely construct SQL queries with identifiers. Modify the code to use psycopg2.sql for safer query construction:

```diff
+from psycopg2 import sql
+
 for sub in subscriptions:
-    regress_cur.execute(f"ALTER SUBSCRIPTION {sub[0]} DISABLE")
+    regress_cur.execute(sql.SQL("ALTER SUBSCRIPTION {} DISABLE").format(sql.Identifier(sub[0])))
-    regress_cur.execute(f"ALTER SUBSCRIPTION {sub[0]} SET (slot_name = NONE)")
+    regress_cur.execute(sql.SQL("ALTER SUBSCRIPTION {} SET (slot_name = NONE)").format(sql.Identifier(sub[0])))
-    regress_cur.execute(f"DROP SUBSCRIPTION {sub[0]}")
+    regress_cur.execute(sql.SQL("DROP SUBSCRIPTION {}").format(sql.Identifier(sub[0])))
 regress_conn.commit()
```
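The injection risk can be illustrated without a database connection. Below is a stdlib-only sketch of PostgreSQL-style identifier quoting, roughly what sql.Identifier produces; quote_ident here is a hypothetical helper, not the psycopg2 API:

```python
def quote_ident(name: str) -> str:
    # Double-quote the identifier and double any embedded quotes,
    # following PostgreSQL's quoted-identifier rules.
    return '"' + name.replace('"', '""') + '"'

# A crafted subscription name that smuggles in a second statement.
malicious = "sub1 DISABLE; DROP TABLE users; --"

unsafe = f"ALTER SUBSCRIPTION {malicious} DISABLE"
safe = f"ALTER SUBSCRIPTION {quote_ident(malicious)} DISABLE"

print(unsafe)  # the DROP TABLE escapes into the statement text
print(safe)    # the whole name stays inside one quoted identifier
```

In the quoted form the entire malicious string is a single (nonexistent) identifier, so the server rejects it instead of executing the embedded DROP TABLE.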
Line 61: Use parameterized queries when dropping roles [security]

Directly interpolating role into SQL statements can pose a security risk if role contains unexpected characters. Utilize the psycopg2.sql module to safely include identifiers in your SQL queries. Update the code to safely construct the SQL statement:

```diff
+from psycopg2 import sql
+
 for role in roles:
-    cur.execute(f"DROP ROLE {role}")
+    cur.execute(sql.SQL("DROP ROLE {}").format(sql.Identifier(role)))
```
Lines 58-59: Avoid logging potentially sensitive information [security]

Logging role names might expose sensitive information. Consider omitting or anonymizing the role names in the logs to enhance security. Modify the log statement:

```diff
-log.info("Role found: %s", role[0])
+log.info("Extra role found and will be dropped.")
```
Problem
We need to be able to run the regression tests against a cloud-based Neon instance in order to prepare the migration to the ARM architecture.
Summary of changes
Some tests were modified to work on the cloud instance (e.g. passwords were added, server-side COPY was changed to client-side, etc.).