Azure CI: Cache #2615
Merged
Conversation
ax3l force-pushed the ci-azureCaching branch 9 times, most recently from 4521c6d to 50883ef on December 2, 2021 at 09:21
Cannot see a caching effect/speedup for Ccache yet; Python installs are found. Maybe worth trying again after #2556 ... Ah, found the likely reason:
ax3l force-pushed the ci-azureCaching branch 3 times, most recently from e459496 to a09970f on December 2, 2021 at 20:13
ax3l force-pushed the ci-azureCaching branch 2 times, most recently from 869fe11 to d60b543 on December 3, 2021 at 00:37
Only blocked by #2624 now
Allow using a fixed instead of a unique temporary directory. This helps `ccache` to cache compilation, because absolute paths no longer change between builds.
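The idea can be sketched as a small shell helper. The variable name `WARPX_CI_TMP` comes from the commit title in this PR; the helper around it is an illustrative assumption, not the PR's actual script:

```shell
# Sketch (assumption, not the PR's exact code): prefer a fixed build
# directory when WARPX_CI_TMP is set, otherwise fall back to a unique
# mktemp path. A fixed path keeps absolute paths identical across CI
# runs, so ccache can match object files from previous builds.
choose_tmp_dir() {
    if [ -n "${WARPX_CI_TMP:-}" ]; then
        # fixed, reusable location -> stable absolute paths for ccache
        mkdir -p "${WARPX_CI_TMP}"
        printf '%s\n' "${WARPX_CI_TMP}"
    else
        # unique per-run directory -> paths differ on every build
        mktemp -d
    fi
}
```

With `WARPX_CI_TMP` exported, every run builds under the same absolute path; left unset, each run gets a fresh `mktemp -d` directory and ccache misses on every object.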
ax3l force-pushed the ci-azureCaching branch 2 times, most recently from f25a7a9 to 2942ca6 on December 4, 2021 at 02:32
@RemiLehe ready for merge :)
ax3l force-pushed the ci-azureCaching branch 3 times, most recently from 6000bb3 to 5b61cf2 on December 4, 2021 at 21:42
Try to use as much caching as possible on Azure. This might help to reuse AMReX objects between our weekly updates. It might also be just way too large and get evicted quickly.
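On Azure Pipelines, this kind of caching is typically wired up with the built-in `Cache@2` task. The snippet below is a generic sketch with assumed key names and cache path, not the YAML from this PR:

```yaml
# Generic Azure Pipelines cache step (illustrative key and path):
# restores CCACHE_DIR before the build and saves it again afterwards.
variables:
  CCACHE_DIR: $(Pipeline.Workspace)/ccache

steps:
- task: Cache@2
  inputs:
    key: 'ccache | "$(Agent.OS)" | "$(Build.SourceBranchName)"'
    restoreKeys: |
      ccache | "$(Agent.OS)"
    path: $(CCACHE_DIR)
  displayName: Cache ccache objects
```

Whether the cache actually helps depends on its size staying under the service's per-cache limits; an oversized cache is evicted, which matches the concern in the commit message above.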
RemiLehe approved these changes on Dec 6, 2021
ax3l added a commit that referenced this pull request on Dec 10, 2021
Lol, that's not the default. We previously had `script` where it was the default. Introduced in #2615
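For context, `set -eu -o pipefail` is indeed not bash's default behavior. The sketch below (the helper name is illustrative) shows what `pipefail` alone changes about a pipeline's exit status:

```shell
# Bash sketch: by default a pipeline's exit status is that of its LAST
# command, so a failure earlier in the pipe is silently swallowed.
# `set -o pipefail` makes the pipeline report a failure in any stage.
pipeline_status() {
    # $1 = "on" enables pipefail inside a subshell; "off" keeps the default
    ( if [ "$1" = "on" ]; then set -o pipefail; fi
      false | true )
    echo $?
}
pipeline_status off   # 0: the failing `false` is hidden by `true`
pipeline_status on    # 1: pipefail surfaces the failure
```

`set -e` (exit on first error) and `set -u` (error on unset variables) are likewise opt-in, which is why CI scripts enable all three explicitly.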
roelof-groenewald added a commit to ModernElectron/WarpX that referenced this pull request on Dec 14, 2021
* C++17, CMake 3.17+ (ECP-WarpX#2300)
* C++17, CMake 3.17+: Update C++ requirements to compile with C++17 or newer.
* Superbuild: C++17 in AMReX/PICSAR/openPMD-api
* Summit: `cuda/11.0.3` -> `cuda/11.3.1`: When compiling AMReX in C++17 on Summit, the `cuda/11.0.3` module (`nvcc 11.0.2211`) dies with:
  ```
  ... Base/AMReX_TinyProfiler.cpp
  nvcc error : 'cicc' died due to signal 11 (Invalid memory reference)
  nvcc error : 'cicc' core dumped
  ```
  Although this usually is a memory issue, it also appears in `-j1` compiles.
* Replace AMREX_SPACEDIM: Evolve & FieldSolver (ECP-WarpX#2642)
* AMREX_SPACEDIM: Boundary Conditions
* AMREX_SPACEDIM: Parallelization
* Fix compilation
* AMREX_SPACEDIM: Initialization
* Fix Typo
* space
* AMREX_SPACEDIM: Particles
* AMREX_SPACEDIM: Evolve and FieldSolver
* C++17: structured bindings to replace "std::tie(x,y,z) = f()" (ECP-WarpX#2644)
* use structured bindings
* std::ignore equivalent in structured bindings (Co-authored-by: Axel Huebl <[email protected]>)
* Perlmutter: December Update (ECP-WarpX#2645): Update the Perlmutter instructions for the major update from December 8th, 2021.
* 1D tests for plasma acceleration (ECP-WarpX#2593)
* modify requirements.txt and add input file for 1D Python pwfa
* add 1D Python plasma acceleration test to CI
* picmi version
* USE_PSATD=OFF for 1D
* Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py (Co-authored-by: Axel Huebl <[email protected]>)
* Update Regression/WarpX-tests.ini (Co-authored-by: Axel Huebl <[email protected]>)
* Cartesian1D class in pywarpx/picmi.py
* requirements.txt: update picmistandard
* update picmi version
* requirements.txt: revert unintended changes
* 1D Laser Acceleration Test
* Update Examples/Physics_applications/laser_acceleration/inputs_1d (Co-authored-by: Axel Huebl <[email protected]>)
* Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py (Co-authored-by: Axel Huebl <[email protected]>)
* add data_list to PICMI laser_acceleration test
* increase max steps and fix bug in pywarpx/picmi.py 1DCartesian moving window direction
* add data_list to Python laser acceleration test
* picmistandard update (Co-authored-by: Prabhat Kumar <[email protected]>, Axel Huebl <[email protected]>)
* CMake 3.22+: Policy CMP0127 (ECP-WarpX#2648): Fix a warning with CMake 3.22+. We use simple syntax in cmake_dependent_option, so we are compatible with the extended syntax in CMake 3.22+: https://cmake.org/cmake/help/v3.22/policy/CMP0127.html
* run_test.sh: Own virtual env (ECP-WarpX#2653): Isolate builds locally, so we no longer overwrite a developer's setup. This also avoids a couple of nifty problems that can occur by mixing those envs. Originally part of ECP-WarpX#2556
* GNUmake: Fix Python Install (force) (ECP-WarpX#2655): Local developers and cached CI installs never installed `pywarpx` if an old version existed. The `--force` must be with us.
* Add: Regression/requirements.txt: Forgotten in ECP-WarpX#2653
* Azure: `set -eu -o pipefail`: Lol, that's not the default.
  We previously had `script` where it was the default. Introduced in ECP-WarpX#2615
* GNUmake & `WarpX-test.ini`: `python` -> `python3`: Consistent with all other calls to Python in tests.
* Fix missing checksums1d (ECP-WarpX#2657)
* Docs: Fix missing Checksum Ref
* Checksum: LaserAcceleration_1d
* Checksum: Python_PlasmaAcceleration_1d
* Regression/requirements.txt: openpmd-api: Follow-up to 8f93e01
* Azure: pre-install `setuptools` upgrade. Might fix:
  ```
  - installing setuptools_scm using the system package manager to ensure consistency
  - migrating from the deprecated setup_requires mechanism to pep517/518
    and using a pyproject.toml to declare build dependencies
    which are reliably pre-installed before running the build tools
  warnings.warn(
  TEST FAILED: /home/vsts/.local/lib/python3.8/site-packages/ does NOT support .pth files
  You are attempting to install a package to a directory that is not on PYTHONPATH
  and which Python does not read ".pth" files from. The installation directory you
  specified (via --install-dir, --prefix, or the distutils default setting) was:

      /home/vsts/.local/lib/python3.8/site-packages/

  and your PYTHONPATH environment variable currently contains:

      ''

  Here are some of your options for correcting the problem:

  * You can choose a different installation directory, i.e., one that is
    on PYTHONPATH or supports .pth files

  * You can add the installation directory to the PYTHONPATH environment
    variable. (It must then also be on PYTHONPATH whenever you run Python
    and want to use the package(s) you are installing.)

  * You can set up the installation directory to support ".pth" files by
    using one of the approaches described here:

    https://setuptools.readthedocs.io/en/latest/easy_install.html#custom-installation-locations

  Please make the appropriate changes for your system and try again.
  ```
* GNUmake `installwarpx`: `mv` -> `cp`: No reason to rebuild. Make will detect dependency when needed.
* Python GNUmake: Remove Prefix Hacks: FREEEEDOM. venv power.
* Azure: Ensure latest venv installed
* Python/setup.py: picmistandard==0.0.18: Forgotten in ECP-WarpX#2593
* Fix: analysis_default_regression.py: Mismatched checksum file due to crude hard-coding.
* PWFA 1D: Fix output name: Hard-coded, undocumented convention: turns out this must be the name of the test that we define in the ini file. Logical, isn't it. Not. Follow-up to ECP-WarpX#2593
* Docs: `python3 -m pip` & Virtual Env (ECP-WarpX#2656)
* Docs: `python3 -m pip`: Use `python3 -m pip`:
  - works independent of PATH
  - always uses the right Python
  - is the recommended way to use `pip`
* Dependencies: Python incl. venv: Backported from ECP-WarpX#2556. Follow-up to ECP-WarpX#2653
* CMake: 3.18+ (ECP-WarpX#2651): With the C++17 switch, we required CMake 3.17+ since that one introduced the `cuda_std_17` target compile feature. It turns out that one of the many CUDA improvements in CMake 3.18+ is also to fix that feature for good, so we bump our CMake requirement. Since CMake is easy to install, it's easier to require a clean newer version than to work around a broken old one. Spotted first by Phil on AWS instances, thx!
* fix check for absolute library install path (ECP-WarpX#2646) (Co-authored-by: Hannes T <[email protected]>)
* use if constexpr to replace template specialization (ECP-WarpX#2660)
* fix for setting the boundary condition potentials in 1D ES simulations (ECP-WarpX#2649)
* `use_default_v_<galilean,comoving>` Only w/ Boosted Frame (ECP-WarpX#2654)
* ICC CI: Unbound Vars (`setvars.sh`) (ECP-WarpX#2663): Ignore:
  ```
  /opt/intel/oneapi/compiler/latest/env/vars.sh: line 236: OCL_ICD_FILENAMES: unbound variable
  ```
* QED openPMD Tests: Specify H5 Backend (ECP-WarpX#2661): We default to ADIOS `.bp` if available.
  Thus, specify the HDF5 assumption.
* C++17: if constexpr for templates in ShapeFactors (ECP-WarpX#2659)
* use if constexpr to replace template specialization
* Remove Interface Annotations
* Replace static_assert with amrex::Abort
* Add includes & authors (Co-authored-by: Axel Huebl <[email protected]>)
* ABLASTR Library (ECP-WarpX#2263)
* [Draft] ABLASTR Library: CMake object library; include FFTW wrappers to start with
* Add: MPIInitHelpers
* Enable ABLASTR-only builds
* Add alias WarpX::ablastr
* ABLASTR: openPMD forwarding
* make_third_party_includes_system: Avoid Collision
* WarpX: depend on `ablastr`
* Definitions: WarpX -> ablastr
* CMake: Reduce build objects for ABLASTR: Skip all object files that we do not use in builds.
* CMake: app/shared links all object targets: Our `PRIVATE` source/objects are not PUBLICly propagated themselves.
* Docs: Fix Warning Logger Typo (ECP-WarpX#2667)
* Python: Add 3.10, Relax upper bound (ECP-WarpX#2664): There are no breaking changes in Python 3.10 that affect us. Given the version compatibility of Python and its ABI stability, there is no need at the moment to provide an upper limit. Thus, relaxed now in general.
* Fixing the initialization of the EB data in ghost cells (ECP-WarpX#2635)
* Using ng_FieldSolver ghost cells in the EB data
* Removed an unused variable
* Fixed makeEBFabFactory also in WarpXRgrid.cpp
* Fixed end-of-line whitespace
* Undoing ECP-WarpX#2607
* Add PML Support for multi-J Algorithm (ECP-WarpX#2603)
* Add PML Support for multi-J Algorithm
* Add CI Test
* Fix the scope of profiler for SYCL (ECP-WarpX#2668): In main.cpp, the destructor of the profiler was called after amrex::Finalize. This caused an error in SYCL due to a device synchronization call in the dtor, because the SYCL queues in amrex had been deleted. In this commit, we limit the scope of the profiler so that its destructor is called before the queues are deleted.
  Note that it was never an issue for CUDA/HIP, because the device synchronization calls in those backends do not need any amrex objects.
* Add high energy asymptotic fit for Proton-Boron total cross section (ECP-WarpX#2408)
* Add high energy asymptotic fit for Proton-Boron total cross section
* Write keV and MeV instead of kev and mev
* Add @return doxystrings
* Add anisotropic mesh refinement example (ECP-WarpX#2650)
* Add anisotropic mesh refinement example
* Update benchmark
* AMReX/PICSAR: Weekly Update (ECP-WarpX#2666)
* AMReX: Weekly Update
* Reset: PEC_particle, RepellingParticles, subcyclingMR: New AMReX grid layout routines split grids until they truly reach the number of MPI ranks, if the blocking factor allows. This changes some of our particle orders slightly.
* Add load balancing test (ECP-WarpX#2561)
* Added embedded_circle test
* Add embedded_circle test files
* Removed diag files
* removed PICMI input file
* Update to use default regression analysis
* Added line breaks for spacing (Co-authored-by: Axel Huebl <[email protected]>)
* Added description
* Fixed benchmark file
* Added load balancing to test
* Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
* Add load balancing to embedded_boundary test
* Updated checksum
* Added embedded_circle test
* Add embedded_circle test files
* removed PICMI input file
* Update to use default regression analysis
* Added load balancing to test
* Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
* Add load balancing to embedded_boundary test
* added analysis.py file in order to relax tolerance on test
* Ensure that timers are used to update load balancing algorithm
* Updated test name retrieval (Co-authored-by: Axel Huebl <[email protected]>, Roelof <[email protected]>, Roelof Groenewald <[email protected]>)
* Adding EB multifabs to the Python interface (ECP-WarpX#2647)
* Adding edge_lengths and face_areas to the Python interface
* Added wrappers for the two new arrays of data
* Adding a CI test
* Fixed test name
* Added customRunCmd
* Added mpi in test
* Refactor DepositCharge so it can be called from ImpactX. (ECP-WarpX#2652)
* Refactor DepositCharge so it can be called from ImpactX.
* change thread_num
* Fix namespace
* remove all static WarpX:: members and methods from DepositChargeDoIt.
* fix unused
* Don't access ref_ratio unless lev != depos_lev
* more unused
* remove function to its own file / namespace
* don't need a CMakeLists.txt for this
* lower case namespace, rename file
* Refactor: Profiler Wrapper: Explicit control for synchronization instead of global state. (Co-authored-by: Axel Huebl <[email protected]>)
* ABLASTR: Fix Doxygen in `DepositCharge`
* update version number and changelog

Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Prabhat Kumar <[email protected]>
Co-authored-by: Luca Fedeli <[email protected]>
Co-authored-by: Prabhat Kumar <[email protected]>
Co-authored-by: s9105947 <[email protected]>
Co-authored-by: Hannes T <[email protected]>
Co-authored-by: Edoardo Zoni <[email protected]>
Co-authored-by: Phil Miller <[email protected]>
Co-authored-by: Lorenzo Giacomel <[email protected]>
Co-authored-by: Weiqun Zhang <[email protected]>
Co-authored-by: Neïl Zaim <[email protected]>
Co-authored-by: Remi Lehe <[email protected]>
Co-authored-by: Kevin Z. Zhu <[email protected]>
Co-authored-by: Andrew Myers <[email protected]>
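The `python3 -m pip` plus virtual-environment workflow pushed in ECP-WarpX#2653/ECP-WarpX#2656 can be sketched as follows; the venv path here is an example, not taken from the PR:

```shell
# Create an isolated virtual environment; installs then go into the venv
# instead of the user's or system site-packages (avoiding the .pth
# failure quoted above).
venv_dir="/tmp/warpx-example-venv"   # illustrative path
python3 -m venv "${venv_dir}"
. "${venv_dir}/bin/activate"

# `python3 -m pip` guarantees the pip that runs belongs to this exact
# interpreter, independent of which `pip` is first in PATH.
python3 -c 'import sys; print(sys.prefix != sys.base_prefix)'   # True inside a venv
# python3 -m pip install --upgrade pip setuptools   # (needs network access)
```

Because activation only prepends the venv's `bin/` to PATH, invoking pip as a module is the one form that cannot silently target a different Python.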
lgiacome pushed a commit to lgiacome/WarpX that referenced this pull request on Dec 16, 2021
* run_test: WARPX_CI_TMP: Allow using a fixed instead of a unique temporary directory. This helps `ccache` to cache compilation, because absolute paths no longer change between builds.
* Azure CI: Cache: Try to use as much caching as possible on Azure. This might help to reuse AMReX objects between our weekly updates. It might also be just way too large and get evicted quickly.
lgiacome pushed a commit to lgiacome/WarpX that referenced this pull request on Dec 16, 2021
Lol, that's not the default. We previously had `script` where it was the default. Introduced in ECP-WarpX#2615