HIP implementation #43

Merged
merged 38 commits into from Jan 4, 2024

Changes from all commits (38 commits)
88ab61f
Update blt to v0.4.1
rcarson3 Feb 7, 2022
fc72b85
update blt to develop
rcarson3 Feb 8, 2022
b7c31f5
Initial HIP implementation that compiles
rcarson3 Feb 8, 2022
778775f
Remove temp fix due to hip branches of ecmech being behind develop
rcarson3 Feb 10, 2022
5775e2f
Fix at least a few small bugs related to execution strategy not being…
rcarson3 Feb 10, 2022
faff5ba
Fix logic/memory bug in a post-processing variable
rcarson3 Feb 14, 2022
316182b
Start making use of MFEM's native NLF Ext class
rcarson3 Mar 24, 2022
d24a3f7
get rid of a double free
rcarson3 Apr 4, 2022
99d1216
Merge remote-tracking branch 'ghssh/exaconstit-dev' into exaconstit-hip
rcarson3 Jul 6, 2022
abf5ff4
update blt to v0.5.1
rcarson3 Jul 6, 2022
9b05abb
Fix issues with hipsparse and mfem includes
rcarson3 Sep 8, 2022
d87fa53
Changes related to MFEM v4.5 update
rcarson3 Oct 31, 2022
c437d5a
HIP memory type default to 64B for host
rcarson3 Nov 30, 2022
5abf6e1
Add extra hip library for batch blas calls
rcarson3 Dec 16, 2022
70253ed
Merge branch 'exaconstit-dev' into exaconstit-hip
rcarson3 Mar 3, 2023
ddff32a
Merge branch 'exaconstit-dev' into exaconstit-hip
rcarson3 Mar 3, 2023
42c54d5
update blt to v0.5.3
rcarson3 Aug 14, 2023
1232222
Update ExaConstit to be less CUDA specific
rcarson3 Aug 14, 2023
d69dc7f
Merge branch 'exaconstit-dev' into exaconstit-hip
rcarson3 Aug 14, 2023
0e03e75
update workflows to not use CUDA/HIP rtmodels but new GPU rtmodel
rcarson3 Aug 14, 2023
01ea9e1
A bug-fix related to building with CUDA
rcarson3 Aug 17, 2023
403afaa
Adding much needed checks to ensure files used in the option files exist
rcarson3 Aug 17, 2023
7313a1e
Finally fix broken test suite on macs and windows...
rcarson3 Aug 17, 2023
1382eca
Add some additional checks for radical entk runs in our job creation …
rcarson3 Aug 31, 2023
a1e9424
tick the version number to v0.7.0
rcarson3 Aug 31, 2023
c049c83
A few modifications to simplify some of the GPU related code and pote…
rcarson3 Jan 3, 2024
782a2bc
Fix bad renaming in prev commit
rcarson3 Jan 3, 2024
2ea8d5f
Merge branch 'exaconstit-dev' into exaconstit-hip
rcarson3 Jan 3, 2024
e41615c
rebaseline voce_ea_cs test data to go with some previously made mfem …
rcarson3 Jan 4, 2024
bb9606b
Fix some compiler warnings
rcarson3 Jan 4, 2024
22a9a55
Fix a build issue on some systems when building RAJA with an out of s…
rcarson3 Jan 4, 2024
da2a4f9
Add check to only care if grain file doesn't exist if auto mesh is used
rcarson3 Jan 4, 2024
10cae06
update .github CI to latest trial 1
rcarson3 Jan 4, 2024
819f1a7
update .github CI to latest trial 2 if this doesn't work will punt to…
rcarson3 Jan 4, 2024
a30ffbd
[squash merge] Various trials to fix the github CI...
rcarson3 Jan 4, 2024
8311875
Update the install scripts note haven't tested locally but should wor…
rcarson3 Jan 4, 2024
fa232b7
Update the README to include various changes
rcarson3 Jan 4, 2024
a371bbf
Update options.toml to note that GPU is the rtmodel option instead of…
rcarson3 Jan 4, 2024
2 changes: 1 addition & 1 deletion .github/workflows/build-exaconstit/action.yml
@@ -38,7 +38,7 @@ runs:

cmake ../ -DENABLE_MPI=ON -DENABLE_FORTRAN=ON \
-DMFEM_DIR=${{ inputs.mfem-dir }} \
-DRAJA_DIR=${{ inputs.raja-dir }} \
-DRAJA_DIR=${{ inputs.raja-dir }}/ \
-DECMECH_DIR=${{ inputs.ecmech-dir }} \
-DSNLS_DIR=${{ inputs.snls-dir }} \
-DCMAKE_BUILD_TYPE=Release \
4 changes: 2 additions & 2 deletions .github/workflows/build-hypre/action.yml
@@ -4,7 +4,7 @@ inputs:
hypre-url:
description: 'URL where to look for Hypre'
required: false
default: 'https://github.com/hypre-space/hypre/archive'
default: 'https://github.com/hypre-space/hypre/archive/'
hypre-archive:
description: 'Archive to download'
required: true
@@ -17,7 +17,7 @@
steps:
- name: Install Hypre
run: |
wget --no-verbose ${{ inputs.hypre-url }}/${{ inputs.hypre-archive }};
wget --no-verbose ${{ inputs.hypre-url }}/refs/tags/${{ inputs.hypre-archive }};
ls;
rm -rf ${{ inputs.hypre-dir }};
tar -xzf ${{ inputs.hypre-archive }};
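A minimal sketch of what the updated step now fetches, using the default `hypre-url` from this action and the archive/dir names set in `build.yml` (the values shown are just those defaults):

```bash
# Sketch only: the updated download path now goes through refs/tags/ on github.com
HYPRE_URL="https://github.com/hypre-space/hypre/archive"   # default hypre-url (without trailing slash)
HYPRE_ARCHIVE="v2.26.0.tar.gz"                             # HYPRE_ARCHIVE from build.yml
HYPRE_DIR="hypre-2.26.0"                                   # HYPRE_TOP_DIR from build.yml

wget --no-verbose "${HYPRE_URL}/refs/tags/${HYPRE_ARCHIVE}"
rm -rf "${HYPRE_DIR}"
tar -xzf "${HYPRE_ARCHIVE}"
```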
2 changes: 1 addition & 1 deletion .github/workflows/build-raja/action.yml
@@ -14,7 +14,7 @@ runs:
steps:
- name: Install RAJA
run: |
git clone --single-branch --branch v0.13.0 --depth 1 ${{ inputs.raja-repo }} ${{ inputs.raja-dir }};
git clone --single-branch --branch v2022.10.5 --depth 1 ${{ inputs.raja-repo }} ${{ inputs.raja-dir }};
cd ${{ inputs.raja-dir }};
git submodule init;
git submodule update;
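For reference, a sketch of the checkout this step performs with the new v2022.10.5 pin; the repository and directory values are assumptions standing in for the action's `raja-repo` and `raja-dir` inputs:

```bash
# Sketch: shallow, single-branch clone of the pinned RAJA release plus its submodules (camp, BLT, ...)
RAJA_REPO="https://github.com/LLNL/RAJA.git"   # assumed value of inputs.raja-repo
RAJA_DIR="raja"                                # assumed value of inputs.raja-dir

git clone --single-branch --branch v2022.10.5 --depth 1 "${RAJA_REPO}" "${RAJA_DIR}"
cd "${RAJA_DIR}"
git submodule init
git submodule update
```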
16 changes: 8 additions & 8 deletions .github/workflows/build.yml
@@ -11,8 +11,8 @@ on:
# Note the SNLS top dir is no longer where SNLS's source is located within ecmech
# rather it's the top directory of ecmech.
env:
HYPRE_ARCHIVE: v2.18.2.tar.gz
HYPRE_TOP_DIR: hypre-2.18.2
HYPRE_ARCHIVE: v2.26.0.tar.gz
HYPRE_TOP_DIR: hypre-2.26.0
METIS_ARCHIVE: metis-5.1.0.tar.gz
METIS_TOP_DIR: metis-5.1.0
MFEM_TOP_DIR: mfem-exaconstit
@@ -71,7 +71,7 @@ jobs:
uses: actions/cache@v2
with:
path: ${{ env.RAJA_TOP_DIR }}
key: ${{ runner.os }}-build-${{ env.RAJA_TOP_DIR }}-v2
key: ${{ runner.os }}-build-${{ env.RAJA_TOP_DIR }}-v2.01

- name: get raja
if: matrix.mpi == 'parallel' && steps.raja-cache.outputs.cache-hit != 'true'
@@ -87,14 +87,14 @@
uses: actions/cache@v2
with:
path: ${{ env.ECMECH_TOP_DIR }}
key: ${{ runner.os }}-build-${{ env.ECMECH_TOP_DIR }}-v2
key: ${{ runner.os }}-build-${{ env.ECMECH_TOP_DIR }}-v2.01

- name: get ecmech
if: matrix.mpi == 'parallel' && steps.ecmech-cache.outputs.cache-hit != 'true'
uses: ./.github/workflows/build-ecmech
with:
ecmech-dir: ${{ env.ECMECH_TOP_DIR }}
raja-dir: '${{ github.workspace }}/${{ env.RAJA_TOP_DIR}}/install_dir/share/raja/cmake/'
raja-dir: '${{ github.workspace }}/${{ env.RAJA_TOP_DIR}}/install_dir/lib/cmake/raja/'

# Get Hypre through cache, or build it.
# Install will only run on cache miss.
@@ -104,7 +104,7 @@
uses: actions/cache@v2
with:
path: ${{ env.HYPRE_TOP_DIR }}
key: ${{ runner.os }}-build-${{ env.HYPRE_TOP_DIR }}-v2
key: ${{ runner.os }}-build-${{ env.HYPRE_TOP_DIR }}-v2.01

- name: get hypre
if: matrix.mpi == 'parallel' && steps.hypre-cache.outputs.cache-hit != 'true'
@@ -139,7 +139,7 @@ jobs:
uses: actions/cache@v2
with:
path: ${{ env.MFEM_TOP_DIR }}
key: ${{ runner.os }}-build-${{ env.MFEM_TOP_DIR }}-v2.02
key: ${{ runner.os }}-build-${{ env.MFEM_TOP_DIR }}-v2.03

- name: install mfem
if: matrix.mpi == 'parallel' && steps.mfem-cache.outputs.cache-hit != 'true'
@@ -154,7 +154,7 @@
- name: build
uses: ./.github/workflows/build-exaconstit
with:
raja-dir: '${{ github.workspace }}/${{ env.RAJA_TOP_DIR}}/install_dir/share/raja/cmake/'
raja-dir: '${{ github.workspace }}/${{ env.RAJA_TOP_DIR}}/install_dir/lib/cmake/raja/'
mfem-dir: '${{ github.workspace }}/${{ env.MFEM_TOP_DIR }}/install_dir/lib/cmake/mfem/'
ecmech-dir: '${{ github.workspace }}/${{ env.ECMECH_TOP_DIR }}/install_dir/'
snls-dir: '${{ github.workspace }}/${{ env.SNLS_TOP_DIR }}/install_dir/'
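The recurring path change in this file is worth calling out: RAJA v2022.x installs its CMake package config under `lib/cmake/raja/` rather than the old `share/raja/cmake/`, and the cache keys are bumped (`-v2.01`, `-v2.03`) so previously cached v0.13.0 builds are not reused. A quick local sanity check of the layout change (sketch, assuming `install_dir` as the install prefix as in this workflow):

```bash
# Old location used by RAJA <= v0.13.x (what the workflow used to point at):
ls "${GITHUB_WORKSPACE}/${RAJA_TOP_DIR}/install_dir/share/raja/cmake/" 2>/dev/null
# New location for RAJA v2022.x (what raja-dir is set to now):
ls "${GITHUB_WORKSPACE}/${RAJA_TOP_DIR}/install_dir/lib/cmake/raja/"
```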
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -17,7 +17,7 @@ endif()

enable_language(C)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)

21 changes: 15 additions & 6 deletions README.md
@@ -2,7 +2,7 @@

Updated: June. 10, 2022

Version 0.6.0
Version 0.7.0

# Description:
A principal purpose of this code app is to probe the deformation response of polycrystalline materials; for example, in homogenization to obtain bulk constitutive properties of metals. This is a nonlinear quasi-static, implicit solid mechanics code built on the MFEM library based on an updated Lagrangian formulation (velocity based).
@@ -13,7 +13,7 @@ On the material modelling front of things, ExaConstit can easily handle various

Through the ExaCMech library, we are able to offer a range of crystal plasticity models that can run on the GPU. The current models that are available are a power law slip kinetic model with both nonlinear and linear variations of a voce hardening law for BCC and FCC materials, and a single Kocks-Mecking dislocation density hardening model with balanced thermally activated slip kinetics with phonon drag effects for BCC, FCC, and HCP materials. Any future model types to the current list are a simple addition within ExaConstit, but they will need to be implemented within ExaCMech. Given the templated structure of ExaCMech, some additions would be comparatively straightforward.

The code is capable of running on the GPU by making use of either a partial assembly formulation (no global matrix formed) or element assembly (only element assembly formed) of our typical FEM code. These methods currently only implement a simple matrix-free jacobi preconditioner. The MFEM team is currently working on other matrix-free preconditioners.
The code is capable of running on the GPU by making use of either a partial assembly formulation (no global matrix formed) or element assembly (only element assembly formed) of our typical FEM code. These methods currently only implement a simple matrix-free jacobi preconditioner. The MFEM team is currently working on other matrix-free preconditioners. Additionally, ExaConstit can be built to run with either CUDA or HIP support in order to run on most GPU-capable machines out there.

The code supports constant time steps, user-supplied variable time steps, or automatically calculated time steps. Boundary conditions are supplied for the velocity field on a surface. The code supports a number of different preconditioned Krylov iterative solvers (PCG, GMRES, MINRES) for either symmetric or nonsymmetric positive-definite systems. We also support either a newton raphson or newton raphson with a line search for the nonlinear solve. We might eventually look into supporting a nonlinear solver such as L-BFGS as well.

@@ -50,19 +50,28 @@ Several small examples that you can run are found in the ```test/data``` directo

The ```scripts/postprocessing``` directory contains several useful post-processing tools. The ```macro_stress_strain_plot.py``` file can be used to generate macroscopic stress strain plots. An example script ```adios2_example.py``` is provided as example for how to make use of the ```ADIOS2``` post-processing files if ```MFEM``` was compiled with ```ADIOS2``` support. It's highly recommended to install ```MFEM``` with this library if you plan to be doing a lot of post-processing of data in python.

A set of scripts to perform lattice strain calculations similar to those found in powder diffraction type experiments can be found in the ```scripts/postprocessing``` directory. The appropriate python scripts are: `adios2_extraction.py`, `strain_Xtal_to_Sample.py`, and `calc_lattice_strain.py`. In order to use these scripts, one needs to run with the `light_up=true` option set in the `Visualization` table of your simulation option file.

# Workflow Examples

We've provided several different useful workflows in the `workflows` directory. One is an optimization set of scripts that makes use of a genetic algorithm to optimize material parameters based on experimental results. Internally, it makes use of either a simple workflow manager for something like a workstation or it can leverage the python bindings to the Flux job queue manager created initially by LLNL to run on large HPC systems.

The other workflow is based on a UQ workflow for metal additive manufacturing that was developed as part of the ExaAM project. You can view the open short workshop paper for an overview of the ExaAM project's workflow and the results https://doi.org/10.1145/3624062.3624103 . This workflow connects microstructures provided by an outside code such as LLNL's ExaCA code (https://github.com/LLNL/ExaCA) or other sources such as nf-HEDM methods to local properties to be used by a part scale application code. The goal here is to utilize ExaConstit to run a ton of simulations rather than experiments in order to obtain data that can be used to parameterize macroscopic material models such as an anisotropic yield surface.

# Installing Notes:

* git clone the LLNL BLT library into cmake directory. It can be obtained at https://github.com/LLNL/blt.git
* MFEM will need to be built with hypre v2.18.2 - v2.20.*; metis5; RAJA; and optionally Conduit, ADIOS2, or ZLIB.
* MFEM will need to be built with hypre v2.26.0-v2.30.0; metis5; RAJA v2022.x+; and optionally Conduit, ADIOS2, or ZLIB.
* Conduit and ADIOS2 supply output support. ZLIB allows MFEM to read in gzip mesh files or save data as being compressed.
* You'll need to use the exaconstit-dev branch of MFEM found on this fork of MFEM: https://github.com/rcarson3/mfem.git
* We do plan on upstreaming the necessary changes needed for ExaConstit into the master branch of MFEM, so you'll no longer be required to do this
* Version 0.7.0 of ExaConstit is compatible with the following mfem hash 78a95570971c5278d6838461da6b66950baea641
* Version 0.6.0 of ExaConstit is compatible with the following mfem hash 1b31e07cbdc564442a18cfca2c8d5a4b037613f0
* Version 0.5.0 of ExaConstit required 5ebca1fc463484117c0070a530855f8cbc4d619e
* ExaCMech is required for ExaConstit to be built and can be obtained at https://github.com/LLNL/ExaCMech.git and now requires the develop branch. ExaCMech depends internally on SNLS, from https://github.com/LLNL/SNLS.git.
* ExaCMech is required for ExaConstit to be built and can be obtained at https://github.com/LLNL/ExaCMech.git and now requires the develop branch. ExaCMech depends internally on SNLS, from https://github.com/LLNL/SNLS.git. We depend on v0.3.4 of ExaCMech as of this point in time.
* For versions of ExaCMech >= 0.3.3, you'll need to add `-DENABLE_SNLS_V03=ON` to the cmake commands as a number of cmake changes were made to that library and SNLS.
* RAJA is required for ExaConstit to be built and should be the same one that ExaCMech and MFEM are built with. It can be obtained at https://github.com/LLNL/RAJA. Currently, RAJA >= v0.13.0 is required for ExaConstit due to a dependency update in MFEMv4.3.
* An example install bash script for unix systems can be found in ```scripts/install/unix_install_example.sh```. This is provided as an example of how to install ExaConstit and its dependencies, but it is not guaranteed to work on every system. A CUDA version of that script is also included in that folder, and only minor modifications are required if using a version of Cmake >= 3.18.*. In those cases ```CUDA_ARCH``` has been changed to ```CMAKE_CUDA_ARCHITECTURES```. You'll also need to look up what you're CUDA architecture compute capability is set to and modify that within the script. Currently, it is set to ```sm_70``` which is associated with the Volta architecture.
* RAJA is required for ExaConstit to be built and should be the same one that ExaCMech and MFEM are built with. It can be obtained at https://github.com/LLNL/RAJA. Currently, RAJA >= 2022.10.x is required for ExaConstit due to a dependency update in MFEMv4.5.
* An example install bash script for unix systems can be found in ```scripts/install/unix_install_example.sh```. This is provided as an example of how to install ExaConstit and its dependencies, but it is not guaranteed to work on every system. A CUDA version of that script is also included in that folder, and only minor modifications are required if using a version of Cmake >= 3.18.*. In those cases ```CUDA_ARCH``` has been changed to ```CMAKE_CUDA_ARCHITECTURES```. You'll also need to look up what you're CUDA architecture compute capability is set to and modify that within the script. Currently, it is set to ```sm_70``` which is associated with the Volta architecture.


* Create a build directory and cd into there
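As a usage note for the lattice-strain section added to the README above: after a run completes with `light_up=true` set in the `Visualization` table, the three post-processing scripts are run in sequence. A sketch of that flow; the real invocations need script-specific arguments (input/output paths) that are assumptions not shown here:

```bash
# Sketch of the lattice-strain post-processing chain; arguments omitted and assumed per script.
cd scripts/postprocessing
python3 adios2_extraction.py       # extract the needed fields from the ADIOS2 output
python3 strain_Xtal_to_Sample.py   # rotate crystal-frame quantities into the sample frame
python3 calc_lattice_strain.py     # compute lattice strains akin to powder-diffraction measures
```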
2 changes: 1 addition & 1 deletion cmake/CMakeBasics.cmake
@@ -4,7 +4,7 @@
set(PACKAGE_BUGREPORT "[email protected]")

set(EXACONSTIT_VERSION_MAJOR 0)
set(EXACONSTIT_VERSION_MINOR 6)
set(EXACONSTIT_VERSION_MINOR 7)
set(EXACONSTIT_VERSION_PATCH \"0\")

set(HEADER_INCLUDE_DIR
2 changes: 2 additions & 0 deletions cmake/ExaConstitOptions.cmake
@@ -7,6 +7,8 @@ option(ENABLE_TESTS "Enable tests" OFF)

option(ENABLE_CUDA "Enable CUDA" OFF)

option(ENABLE_HIP "Enable HIP" OFF)

option(ENABLE_OPENMP "Enable OpenMP" OFF)

option(ENABLE_SNLS_V03 "Enable building library with v0.3.0+ of SNLS" OFF)
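With `ENABLE_HIP` now sitting alongside `ENABLE_CUDA`, a HIP configure of ExaConstit looks roughly like the CUDA configure in the install script further down, with the GPU flags swapped. This is only a sketch: `hipcc` and `gfx90a` are illustrative assumptions for a generic ROCm machine, not something this PR prescribes.

```bash
# Sketch only: HIP analogue of the CUDA configure line used elsewhere in this PR.
# hipcc and gfx90a are assumptions; pick whatever matches your ROCm toolchain and GPU.
cmake ../ -DENABLE_MPI=ON -DENABLE_FORTRAN=ON \
      -DMFEM_DIR=${BASE_DIR}/mfem/install_dir/lib/cmake/mfem/ \
      -DECMECH_DIR=${BASE_DIR}/ExaCMech/install_dir/ \
      -DRAJA_DIR=${BASE_DIR}/raja/install_dir/lib/cmake/raja/ \
      -DSNLS_DIR=${BASE_DIR}/ExaCMech/install_dir/ \
      -DENABLE_SNLS_V03=ON \
      -DCMAKE_BUILD_TYPE=Release \
      -DENABLE_HIP=ON \
      -DCMAKE_CXX_COMPILER=hipcc \
      -DCMAKE_HIP_ARCHITECTURES=gfx90a \
      -DENABLE_TESTS=ON
```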
2 changes: 1 addition & 1 deletion cmake/blt
Submodule blt updated 697 files
10 changes: 10 additions & 0 deletions cmake/thirdpartylibraries/FindMFEM.cmake
@@ -136,5 +136,15 @@
message(FATAL_ERROR "MFEM_FOUND is not a path to a valid MFEM install")
endif()

if(ENABLE_HIP)
find_package(ROCSPARSE REQUIRED)
find_package(HIPBLAS REQUIRED)
find_package(ROCRAND REQUIRED)
endif()

if(ENABLE_CUDA)
find_package(CUDAToolkit REQUIRED)
endif()

message(STATUS "MFEM Includes: ${MFEM_INCLUDE_DIRS}")
message(STATUS "MFEM Libraries: ${MFEM_LIBRARIES}")
10 changes: 9 additions & 1 deletion cmake/thirdpartylibraries/FindRAJA.cmake
@@ -26,7 +26,15 @@ if (EXISTS "${RAJA_RELEASE_CMAKE}")
endif()

find_package(RAJA REQUIRED)
find_package(camp REQUIRED)

if(camp_DIR AND (RAJA_VERSION_MINOR GREATER 10 OR RAJA_VERSION_MAJOR GREATER 0))
find_package(camp REQUIRED
NO_DEFAULT_PATH
PATHS ${camp_DIR}
${camp_DIR}/lib/cmake/camp
)
set(ENABLE_CAMP ON CACHE BOOL "")
endif()

if(RAJA_CONFIG_LOADED)
if(ENABLE_OPENMP)
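Since RAJA newer than v0.10 (including the v2022.x releases) ships `camp` as a separate CMake package, the logic above only searches for it when `camp_DIR` is known and the version check passes. If configure does not pick camp up automatically, passing the directory by hand is one workaround; the path below is an assumption based on the usual RAJA `install_dir` layout from the install scripts:

```bash
# Sketch: point CMake at camp's package config explicitly (path is an assumed layout).
cmake ../ -DRAJA_DIR=${BASE_DIR}/raja/install_dir/lib/cmake/raja/ \
      -Dcamp_DIR=${BASE_DIR}/raja/install_dir/lib/cmake/camp/
```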
6 changes: 5 additions & 1 deletion cmake/thirdpartylibraries/SetupThirdPartyLibraries.cmake
@@ -25,6 +25,9 @@ if (DEFINED MFEM_DIR)
TREAT_INCLUDES_AS_SYSTEM ON
INCLUDES ${MFEM_INCLUDE_DIRS}
LIBRARIES ${MFEM_LIBRARIES})
if (ENABLE_HIP)
find_package(HIPSPARSE REQUIRED)
endif()
else()
message(FATAL_ERROR "Unable to find MFEM with given path ${MFEM_DIR}")
endif()
@@ -61,7 +64,8 @@ if (DEFINED RAJA_DIR)
blt_register_library( NAME raja
TREAT_INCLUDES_AS_SYSTEM ON
INCLUDES ${RAJA_INCLUDE_DIRS}
LIBRARIES ${RAJA_LIBRARY})
LIBRARIES ${RAJA_LIBRARY}
DEPENDS_ON camp)
else()
message(FATAL_ERROR "Unable to find RAJA with given path ${RAJA_DIR}")
endif()
23 changes: 12 additions & 11 deletions scripts/install/unix_gpu_install_example.sh
@@ -6,7 +6,9 @@
SCRIPT=$(readlink -f "$0")
BASE_DIR=$(dirname "$SCRIPT")
#change this to the cuda compute capability for your gpu
LOC_CUDA_ARCH='sm_70'
# LOC_CUDA_ARCH='sm_70'
#CMAKE_CUDA_ARCHITECTURES drops the sm_ aspect of the cuda compute capability
LOC_CUDA_ARCH='70'

# If you are using SPACK or have another module like system to set-up your developer environment
# you'll want to load up the necessary compilers and devs environments
@@ -15,7 +17,7 @@ LOC_CUDA_ARCH='sm_70'

# Build raja
if [ ! -d "raja" ]; then
git clone --recursive https://github.com/llnl/raja.git --branch v0.13.0 --single-branch
git clone --recursive https://github.com/llnl/raja.git --branch v2022.10.5 --single-branch
cd ${BASE_DIR}/raja
# Instantiate all the submodules
git submodule init
@@ -28,7 +30,7 @@ if [ ! -d "raja" ]; then
-DENABLE_OPENMP=OFF \
-DENABLE_CUDA=ON \
-DRAJA_TIMER=chrono \
-DCUDA_ARCH=${LOC_CUDA_ARCH} \
-DCMAKE_CUDA_ARCHITECTURES=${LOC_CUDA_ARCH} \
-DENABLE_TESTS=OFF \
-DCMAKE_BUILD_TYPE=Release
make -j 4
@@ -54,13 +56,13 @@ if [ ! -d "ExaCMech" ]; then
cd ${BASE_DIR}/ExaCMech/build
# GPU build
cmake ../ -DCMAKE_INSTALL_PREFIX=../install_dir/ \
-DRAJA_DIR=${BASE_DIR}/raja/install_dir/share/raja/cmake/ \
-DRAJA_DIR=${BASE_DIR}/raja/install_dir/lib/cmake/raja/ \
-DENABLE_OPENMP=OFF \
-DENABLE_CUDA=ON \
-DENABLE_TESTS=OFF \
-DENABLE_MINIAPPS=OFF \
-DCMAKE_BUILD_TYPE=Release \
-DCUDA_ARCH=${LOC_CUDA_ARCH} \
-DCMAKE_CUDA_ARCHITECTURES=${LOC_CUDA_ARCH} \
-DBUILD_SHARED_LIBS=OFF
make -j 4
make install
@@ -75,7 +77,7 @@ fi
cd ${BASE_DIR}
if [ ! -d "hypre" ]; then

git clone https://github.com/hypre-space/hypre.git --branch v2.20.0 --single-branch
git clone https://github.com/hypre-space/hypre.git --branch v2.26.0 --single-branch
cd ${BASE_DIR}/hypre/src
# Based on their install instructions
# This should work on most systems
@@ -109,8 +111,7 @@ cd ${BASE_DIR}

if [ ! -d "metis-5.1.0" ]; then

curl -o metis-5.1.0.tar.gz http://glaros.dtc.umn.edu/gkhome/fetch/sw/metis/metis-5.1.0.tar.gz
tar -xzf metis-5.1.0.tar.gz
curl -o metis-5.1.0.tar.gz https://mfem.github.io/tpls/metis-5.1.0.tar.gz
tar -xzf metis-5.1.0.tar.gz
rm metis-5.1.0.tar.gz
cd metis-5.1.0
mkdir install_dir
@@ -143,7 +144,7 @@ if [ ! -d "mfem" ]; then
-DHYPRE_DIR=${HYPRE_DIR} \
-DCMAKE_INSTALL_PREFIX=../install_dir/ \
-DMFEM_USE_CUDA=ON \
-DCUDA_ARCH=${LOC_CUDA_ARCH} \
-DCMAKE_CUDA_ARCHITECTURES=${LOC_CUDA_ARCH} \
-DMFEM_USE_OPENMP=OFF \
-DMFEM_USE_RAJA=ON -DRAJA_DIR=${BASE_DIR}/raja/install_dir/ \
-DCMAKE_BUILD_TYPE=Release
@@ -178,12 +179,12 @@ if [ ! -d "ExaConstit" ]; then
cmake ../ -DENABLE_MPI=ON -DENABLE_FORTRAN=ON \
-DMFEM_DIR=${BASE_DIR}/mfem/install_dir/lib/cmake/mfem/ \
-DECMECH_DIR=${BASE_DIR}/ExaCMech/install_dir/ \
-DRAJA_DIR=${BASE_DIR}/raja/install_dir/share/raja/cmake/ \
-DRAJA_DIR=${BASE_DIR}/raja/install_dir/lib/cmake/raja/ \
-DSNLS_DIR=${BASE_DIR}/ExaCMech/install_dir/ \
-DENABLE_SNLS_V03=ON \
-DCMAKE_BUILD_TYPE=Release \
-DENABLE_CUDA=ON \
-DCUDA_ARCH=${LOC_CUDA_ARCH} \
-DCMAKE_CUDA_ARCHITECTURES=${LOC_CUDA_ARCH} \
-DENABLE_TESTS=ON
# Sometimes the cmake systems can be a bit difficult and not properly find the MFEM installed location
# using the above. If that's the case the below should work:
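One more note on the `LOC_CUDA_ARCH` change at the top of this script: `CMAKE_CUDA_ARCHITECTURES` expects the bare compute capability ('70'), not the `sm_70` form. On reasonably recent NVIDIA drivers the right value can be read straight off the target machine; this is a sketch, and the `compute_cap` query field may not exist on older drivers:

```bash
# Prints e.g. "7.0" on Volta; drop the dot to get the CMAKE_CUDA_ARCHITECTURES value ("70").
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
```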