
tests: WIP: MITM proxy between pageserver and compute for fault testing #10026

Draft · wants to merge 2 commits into main
Conversation

@hlinnaka (Contributor) commented Dec 5, 2024

This doesn't do much yet, but it provides the basic man-in-the-middle proxy that can be used to inject faults into the pageserver <-> compute communication. I will use this to write tests for various scenarios, to stress the error handling and retry logic in the compute.
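As a rough sketch of the general idea (class and parameter names here are illustrative, not the PR's actual API), an asyncio-based TCP man-in-the-middle proxy can look like this:

import asyncio


class MITMProxy:
    """Illustrative sketch of an asyncio TCP man-in-the-middle proxy,
    not the PR's actual implementation."""

    def __init__(self, listen_port: int, upstream_host: str, upstream_port: int):
        self.listen_port = listen_port
        self.upstream_host = upstream_host
        self.upstream_port = upstream_port

    async def handle_client(self, client_reader, client_writer):
        # Open a connection to the real pageserver for each incoming compute connection.
        upstream_reader, upstream_writer = await asyncio.open_connection(
            self.upstream_host, self.upstream_port
        )

        async def relay(reader, writer):
            # Fault-injection hook: data can be delayed, truncated, or dropped here.
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
            writer.close()

        # Relay bytes in both directions until either side closes.
        await asyncio.gather(
            relay(client_reader, upstream_writer),
            relay(upstream_reader, client_writer),
        )

    async def run_server(self):
        server = await asyncio.start_server(
            self.handle_client, "127.0.0.1", self.listen_port
        )
        async with server:
            await server.serve_forever()

In a test, the compute would be pointed at the proxy's listen port instead of the real pageserver, and faults are injected inside the relay loop.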


github-actions bot commented Dec 5, 2024

6410 tests run: 6130 passed, 6 failed, 274 skipped (full report)


Failures on Postgres 17

Failures on Postgres 16

Failures on Postgres 14

# Run all failed tests locally:
scripts/pytest -vv -n $(nproc) -k "test_compute_pageserver_connection_stress2[release-pg14] or test_compute_pageserver_connection_stress2[release-pg16] or test_compute_pageserver_connection_stress2[release-pg17] or test_compute_pageserver_connection_stress2[debug-pg17] or test_compute_pageserver_connection_stress2[release-pg17] or test_compute_pageserver_connection_stress2[release-pg17]"
Flaky tests (10)

Postgres 17

Postgres 16

Postgres 15

Postgres 14

Test coverage report is not available

The comment gets automatically updated with the latest test results
328408b at 2024-12-05T21:35:50.141Z :recycle:

log.info("proxy shutting down")

def launch_server_in_thread(self):
    t1 = threading.Thread(target=asyncio.run, args=self.run_server)
@vadim2404 (Contributor) commented Dec 5, 2024

I'm not sure this is the right mix. It won't create different event loops, and you will be fighting for the same resources (also, given Python's GIL, the thread here is useless). You can just run a group of tasks (here is an example of that approach):

import asyncio


async def x(i):
    print(i)


async def group():
    tasks = [x(i) for i in range(10)]
    await asyncio.gather(*tasks)


if __name__ == '__main__':
    asyncio.run(group())


Even in your case, you can just call asyncio.run(self.run_server()) directly; why do you need the Python thread?
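For context, Thread expects args to be an argument tuple, so args=self.run_server in the diff would also fail at run time. A small sketch of the two options being discussed, with run_server as a stand-in for the PR's coroutine:

import asyncio
import threading


class Proxy:
    """Sketch comparing the two options discussed above; run_server is a
    placeholder, not the PR's real proxy loop."""

    async def run_server(self):
        await asyncio.sleep(0.1)  # placeholder for the real proxy loop

    def run_blocking(self):
        # Option 1 (reviewer's suggestion): no thread, run the event loop directly.
        asyncio.run(self.run_server())

    def launch_server_in_thread(self) -> threading.Thread:
        # Option 2: if the test must keep executing synchronously, give the thread
        # its own event loop. args must be a tuple containing the coroutine.
        t = threading.Thread(target=asyncio.run, args=(self.run_server(),), daemon=True)
        t.start()
        return t


if __name__ == "__main__":
    Proxy().launch_server_in_thread().join()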
