
Merging Casey's Thesis Work #265

Open
wants to merge 80 commits into base: devel

Conversation

csdechant
Collaborator

This PR directly addresses and should close Issue #257. This PR is based on @cticenhour's thesis-rebase branch, excluding the $H_{\phi}$ objects. The main focus of this PR includes:

  • A new material object, FieldSolverMaterial, which allows users to supply the electric field terms in the drift-diffusion equations as either an electrostatic potential variable or a vector electric field variable.
  • A material object to calculate the plasma dielectric coefficient.

This PR also includes updates based on @csdechant's ZapdosAD-plus branch (which was rebased early on into the thesis-rebase branch), excluding the FV objects. That work includes:

  • Kernel objects for the electron energy flux using a thermal conductivity and an effective electric field calculation for ions.
  • Boundary condition objects using an effective electric field calculation for ions and a dielectric time-integral boundary condition.
  • Tests based on the method of manufactured solutions (MMS).
  • Minor bug fixes to the Newton-Shooting method acceleration objects.

@csdechant
Collaborator Author

@cticenhour this PR has a lot of commits to begin with. I am in favor of squashing all the commits into one, but since this PR is based on your old branch, I wanted to get your opinion first.

@csdechant
Collaborator Author

@cticenhour I am fixing the Precheck errors, and one of them is a "banned keywords" error. The PlasmaDielectricConstant material object uses std::cout. I suggest that I just comment out that section of code and leave a NOTE: comment stating that users can uncomment it for debugging purposes. What are your thoughts?

@csdechant csdechant force-pushed the Casey-Thesis-No-HPhi-Objects branch from 7fa241a to a8a7474 on November 22, 2024 23:03
@moosebuild
Collaborator

Job Precheck, step Clang format on 7014a11 wanted to post the following:

Your code requires style changes.

A patch was auto generated and copied here
You can directly apply the patch by running, in the top level of your repository:

curl -s https://mooseframework.inl.gov/zapdos/docs/PRs/265/clang_format/style.patch | git apply -v

Alternatively, with your repository up to date and in the top level of your repository:

git clang-format 402e17523a5ef51b0dac40c3c8175cd976b6ce48

@moosebuild
Collaborator

moosebuild commented Nov 22, 2024

Job Documentation, step Sync to remote on c17657f wanted to post the following:

View the site here

This comment will be updated on new commits.

@csdechant
Collaborator Author

@cticenhour @gsgall This PR is ready for review. As with the other active PR, #263, @gsgall is planned to be the primary reviewer (since this PR involves both Casey's and my work). There are two issues I want to point out:

  • Depending on which PR is merged first (this one or #263, Updating Doc String: Address Issue #258), the other PR will most likely need to be rebased due to replacing _grad_potential with _electric_field.
  • There is a test failure due to an Exodiff for tutorial/tutorial05-PlasmaWaterInterface. In particular, the difference is in the Current_OHm and EFieldx1 aux variables, with relative differences of 3.46855e-05 and 1.67417e-05, respectively. This error only occurs with CIVET, and I have not been able to replicate it on my Mac (Intel processor). I am in favor of increasing the ceiling threshold for these variables, but I wanted your input first.
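
Raising that ceiling would presumably be a small change in the tutorial's `tests` spec. A hypothetical sketch, where the file names and the chosen tolerance are purely illustrative (not the actual Zapdos spec):

```
[Tests]
  [tutorial05]
    type = Exodiff
    input = 'tutorial05.i'            # illustrative name
    exodiff = 'tutorial05_out.e'      # illustrative name
    # Relative ceiling loosened above the observed ~3.5e-05 differences
    rel_err = 1e-4
  []
[]
```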

cticenhour and others added 20 commits November 27, 2024 15:11
- Allow for usage with Steady executioner.
- Also fixes electron momentum transfer frequency unit typo (Hz to rad/s).
- input parameters were made consistent with issue shannon-lab#223
- secondary electron emission coefficients were made material and species dependent
- secondary electron energy was moved to a BC input parameter
- several test inputs were updated in order to facilitate this as well
- made all member variables that can be const, const for the BCs
- also moved any variable declaration in a member function to be a member variable
- all of the secondary electron emission coefficients were also made material properties
…jects

Due to the git history involving a mixture of edits between the HPhiCylindricalPlasma and PlasmaDielectricConstant kernel objects and the PlasmaDielectricConstant material object, there is no clear breaking point that only includes the PlasmaDielectricConstant material object without substantial re-work. For this reason, the kernel objects will be removed from this branch and worked on in a separate branch until further notice.
Removing the FaradayCurrentBC object since it belongs to the HPhiCylindricalPlasma set of objects. These objects will be worked on in a separate branch.
AddPeriodicControllers and the Shooting Method Kernels were updated in previous commits
(868f397 and 2c20476, respectively), which resulted in changes in the tests that involve
these objects. This commit addresses those test changes.
@gsgall
Collaborator

gsgall commented Nov 27, 2024

@csdechant Is the testing strategy of this PR similar to the larger physics-based testing used for the majority of Zapdos testing? Or is it more math based, focusing on the simplest tests possible for each kernel, i.e., having a single kernel in an input whenever possible?

@gsgall All new tests are math based (MMS, to be exact) and, in my opinion, all new tests in Zapdos should be against some type of known solution (either analytical or MMS); any new test based on validation should be considered on a case-by-case basis. The new MMS tests are designed to test problems of increasing coupling (i.e., one for diffusion only, the next advection-diffusion with a function potential, the next advection-diffusion with a variable potential, etc.).

The reason for the large gold files is that these MMS tests use solutions designed for an RF discharge, so the tests run for one RF cycle. The current tests look at every time step, so the gold files are big (in honesty, I could probably increase the step size too, as the original step size was chosen to be small enough not to interfere with spatial convergence studies).
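
The MMS workflow under discussion (manufacture a solution, derive the forcing term, solve, and check the observed convergence rate) can be illustrated with a generic sketch; this is plain Python on a 1-D steady diffusion stand-in, not Zapdos code:

```python
import math

def solve_mms(n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using second-order
    central differences on n interior points (Thomas algorithm).

    The manufactured solution is u(x) = sin(pi*x), which gives the
    forcing term f(x) = pi^2 * sin(pi*x).
    """
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    a = [-1.0] * n                      # sub-diagonal
    b = [2.0] * n                       # main diagonal
    c = [-1.0] * n                      # super-diagonal
    d = [math.pi ** 2 * math.sin(math.pi * xi) * h * h for xi in x]
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    # Discrete L2 error against the manufactured solution
    err = math.sqrt(h * sum((u[i] - math.sin(math.pi * x[i])) ** 2
                            for i in range(n)))
    return h, err

h1, e1 = solve_mms(32)
h2, e2 = solve_mms(64)
rate = math.log(e1 / e2) / math.log(h1 / h2)
print(f"observed convergence rate: {rate:.2f}")  # close to 2 for this scheme
```

A real Zapdos MMS test would of course target the drift-diffusion kernels, but the rate check at the end is the same idea the convergence-study tests automate.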

@csdechant Is there a reason we cannot just use a steady-state MMS test for each Kernel individually? Testing with increasing coupling over an entire RF cycle seems like overkill for testing each Kernel, and a bit redundant. I don't really see a need for these tests to be transient simulations if the Kernel is not a time derivative. If we guarantee that each kernel works as expected on its own, we should be able to guarantee the expected behaviour of our system, since MOOSE takes care of using multiple kernels in one system. I would like to advocate for a simpler testing approach: a single steady-state MMS test for each kernel, where each test uses only as many objects as strictly necessary for testing a specific kernel, ideally a single kernel per input if possible. I would also like to see our testing convention be something closer to the MOOSE convention of separating the tests by what type of object we are testing (Kernel, AuxKernel, ..., etc.).
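
The single-kernel, steady-state layout being advocated could look roughly like the following minimal input sketch. This uses only generic framework objects and is illustrative, not an actual Zapdos test input; exact parameter names may differ by MOOSE version:

```
[Mesh]
  [gen]
    type = GeneratedMeshGenerator
    dim = 1
    nx = 32
  []
[]

[Variables]
  [u]
  []
[]

[Functions]
  [exact]
    type = ParsedFunction
    expression = 'sin(pi*x)'
  []
  [forcing]
    type = ParsedFunction
    expression = 'pi*pi*sin(pi*x)'
  []
[]

[Kernels]
  # The single kernel under test
  [diff]
    type = Diffusion
    variable = u
  []
  # MMS forcing term derived from the manufactured solution
  [mms_force]
    type = BodyForce
    variable = u
    function = forcing
  []
[]

[BCs]
  [walls]
    type = FunctionDirichletBC
    variable = u
    boundary = 'left right'
    function = exact
  []
[]

[Postprocessors]
  [l2_error]
    type = ElementL2Error
    variable = u
    function = exact
  []
[]

[Executioner]
  type = Steady
  solve_type = NEWTON
[]

[Outputs]
  csv = true
[]
```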

The other concern I have with these tests, along with the size of the gold files, is the amount of time it takes to run the test suite. Checking the CI testing job, https://civet.inl.gov/job/2565410/, it shows that this test suite takes a little over 7 minutes to run. It also shows an average test time of 16 seconds, even when running the tests in parallel. I really think we should be aiming to decrease this to somewhere around a few seconds for each test, unless there is a specific reason we need these extended tests.

@csdechant csdechant force-pushed the Casey-Thesis-No-HPhi-Objects branch from c37b2a6 to c17657f on November 27, 2024 22:57
@csdechant
Collaborator Author

@csdechant Is there a reason we cannot just use a steady-state MMS test for each Kernel individually? Testing with increasing coupling over an entire RF cycle seems like overkill for testing each Kernel, and a bit redundant. I don't really see a need for these tests to be transient simulations if the Kernel is not a time derivative. If we guarantee that each kernel works as expected on its own, we should be able to guarantee the expected behaviour of our system, since MOOSE takes care of using multiple kernels in one system.

These MMS tests were made to verify Zapdos for use with RF discharges, so the manufactured solution should represent a simple, yet similar, solution to an RF discharge. The reason for that is one should not assume that because two separate pieces of code work as intended, the coupled version will also work and need not be tested. This is similar to the distinction between unit vs. integration testing. While MOOSE combines the residuals and handles the Jacobians of multiple kernels, that does not ensure that the coupled model will behave as expected (as an example: what if a developer coded a variable or material as non-AD when it should be AD? The uncoupled test will show no error, as there is no coupling of variables, but since MOOSE doesn't know it should be AD, MOOSE will just provide an incorrect Jacobian).

I would like to advocate for a simpler testing approach. A single steady-state MMS test for each kernel, where each test uses only as many objects as strictly necessary for testing a specific kernel, ideally a single kernel per input, if possible. I would also like to see our testing convention be something closer to the MOOSE convention of separating the tests by what type of object we are testing (Kernel, AuxKernel, ..., etc.).

I agree and disagree with this to a point (maybe it is just wording). My view is similar, as Zapdos should have more simple cases (similar to the MOOSE framework), but also more rigorous coupled cases (similar to other MOOSE modules, such as the Navier-Stokes module). My vision for the Zapdos testbed is as follows:

  • Have a type of "unit test" system that tests just one kernel at a time (these are not "true" unit tests, as they are not testing single functions). These would be strictly non-heavy tests.
  • Have a type of integration test system that tests the coupling of multiple kernels and actions to verify the intended models of Zapdos. This would be a mixture of non-heavy and heavy tests, depending on run time. These should be coarse versions of the more rigorous verification tests.
  • Have a system of verification tests. These are full convergence-study tests that compare the CSV file outputs and plot convergence slopes. These would be marked as heavy by default and would generate verification/convergence plots for the Zapdos website (similar to TMAP8).
  • Finally, have a system of validation tests. Similar to the verification tests, these would be marked as heavy by default and, if needed, as hpc-heavy. These tests too would generate plots for the Zapdos website (similar to the TMAP8 validation tests).

The other concern I have with these tests, along with the size of the gold files, is the amount of time it takes to run the test suite. Checking the CI testing job, https://civet.inl.gov/job/2565410/, it shows that this test suite takes a little over 7 minutes to run. It also shows an average test time of 16 seconds, even when running the tests in parallel. I really think we should be aiming to decrease this to somewhere around a few seconds for each test, unless there is a specific reason we need these extended tests.

I just pushed a commit to reduce the gold files and run time. Also, the new MMS tests are marked as heavy, so they will only run if a user runs ./run_tests -jn --heavy. It seems CIVET for Zapdos runs all the tests together instead of as a separate Test Heavy job, like for MOOSE and MOOSE module PRs. @cticenhour Is there a setting to separate regular vs. heavy tests, so it doesn't look like developers are submitting longer tests without heavy = True?

@gsgall Please let me know if you have any questions or differing opinions on what I have stated above. @cticenhour We talked a little offline about revamping the Zapdos testbed, in regard to Issue #261. Please let me know if you have any comments or concerns about the above vision for the Zapdos testbed.
