
FS test problems #19

Closed
Lewih opened this issue Mar 15, 2022 · 3 comments · Fixed by #33
Lewih (Collaborator) commented Mar 15, 2022

  • Permissions are mostly set to 777 rather than 755 for /users, /data, and /scratch. The test should be broadened to cover this (see the sketch after this list).
  • Leuven seems to have no default /scratch/leuven for external users; is its creation not automated, as it is in Antwerp?
  • The mount test on scratch should target the site-specific folder according to the execution site.
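A minimal sketch of what such a broadened permission check could look like, assuming 0o755 is the expected mode; the paths come from the first bullet, and everything else is illustrative rather than the suite's actual test:

```python
import os
import stat

# Expected mode is an assumption (755); the paths are taken from the
# comment above.
EXPECTED_MODE = 0o755

for path in ('/users', '/data', '/scratch'):
    mode = stat.S_IMODE(os.stat(path).st_mode)
    status = 'OK' if mode == EXPECTED_MODE else f'UNEXPECTED {oct(mode)}'
    print(f'{path}: {status}')
```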
stevenvdb (Contributor) commented Mar 17, 2022

  • The FS test checks for the existence of /scratch/antwerp, /scratch/brussel, /scratch/gent, and /scratch/leuven, but not all of those directories exist in Leuven or Ghent. Only the site-specific one exists.
  • At Leuven, we do not create a scratch directory automatically for all users.
  • The FS test submits a separate job for each directory that is checked. This seems excessive to me; in the long term, such an approach will lead to thousands of jobs being submitted to run the suite.
  • On the Leuven login nodes, VSC_SCRATCH_NODE is set to a non-existing directory, as it is not intended to be used there. Inside a job, the value is unique to each job, so currently the tests for VSC_SCRATCH_NODE fail. They would pass if you only checked that the variable is set (without specifying its value) and, for the test that runs inside a job, optionally checked that the directory is writable by the user (see the sketch below).
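As a rough illustration of the relaxed check suggested in the last bullet (the variable name comes from the comment; the job-detection variables are common scheduler conventions, not necessarily what this suite uses):

```python
import os

# Only verify that the variable is set, without asserting its value.
scratch = os.environ.get('VSC_SCRATCH_NODE')
assert scratch, 'VSC_SCRATCH_NODE is not set'

# Inside a job (detected here via common scheduler variables), optionally
# also check that the directory is writable by the user.
if os.environ.get('SLURM_JOB_ID') or os.environ.get('PBS_JOBID'):
    assert os.access(scratch, os.W_OK), f'{scratch} is not writable'
```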

smoors (Collaborator) commented Mar 18, 2022

  • The FS test submits a separate job for each directory that is checked. This seems excessive to me; in the long term, such an approach will lead to thousands of jobs being submitted to run the suite.

good point. we should try to combine many small jobs into one. this can be done by adding multiple @sanity_functions to a single test that runs multiple commands.
the same can be done for the system tools and environment variable checks.

apparently ReFrame does not allow multiple @sanity_functions in a single class.

the alternative is to use multiple assert_xxx(yyy, msg=zzz) functions in a single @sanity_function, adding the optional msg parameter to specify which check failed in the error message.
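A minimal sketch of that pattern, assuming a ReFrame 3.x-style test; the command, regexes, and messages are illustrative and not the suite's actual FS check:

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class FSSanityTest(rfm.RunOnlyRegressionTest):
    valid_systems = ['*']
    valid_prog_environs = ['*']
    executable = 'bash'
    executable_opts = ['-c', 'ls -ld /scratch; echo VSC_SCRATCH=$VSC_SCRATCH']

    @sanity_function
    def validate(self):
        # Several asserts combined in a single sanity function; the msg=
        # argument identifies the failing check in the error report.
        return sn.all([
            sn.assert_found(r'/scratch', self.stdout,
                            msg='no /scratch in directory listing'),
            sn.assert_not_found(r'VSC_SCRATCH=$', self.stdout,
                                msg='VSC_SCRATCH is not set'),
        ])
```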

Lewih (Collaborator, Author) commented Mar 18, 2022

@smoors @stevenvdb
This can be done, but before implementing this feature it is advisable to produce more meaningful output in the error stream, so that it is clear which part of the test failed; otherwise a deep inspection of the stage folder is necessary after every failure (even a partial one, such as a single variable).

apparently ReFrame does not allow multiple @sanity_functions in a single class.

Yes

the alternative is to use multiple assert_xxx(yyy, msg=zzz) functions in a single @sanity_function, adding the optional msg parameter to specify which check failed in the error message.

With better output, the ReFrame log alone should be enough to inspect the test results, and eventually the tests could be collapsed into a single, more verbose job/test.
Let's open a separate issue for this: #23.

It must be said that this is not how ReFrame is intended to work.

Lewih linked a pull request on Jun 25, 2022 that will close this issue.