[Other] Supercritical CO2 Fracture
Hello, I want to simulate a fracture created by using supercritical CO2 as the fracturing fluid.
But I'm having a hard time doing it (I cannot run the hydraulic fracture example); it was difficult just to install GEOSX.
Could someone please help me or give me some advice?
@DragonBalerion Please follow the guide (see here) step by step to download and compile GEOSX on your machine. To better assist you, if you encounter any issues at certain steps, please give more details and post your error message/log here.
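For reference, the quick-start flow in that guide looks roughly like the following (the repository URL and script paths are taken from the GEOSX quick-start of that era and should be checked against the linked guide; "your-machine.cmake" is a placeholder for your own host-config):

git clone https://github.com/GEOSX/GEOSX.git
cd GEOSX
# Pull in the third-party submodules
git submodule update --init --recursive
# Configure using a host-config that describes your compilers and TPLs
python scripts/config-build.py -hc host-configs/your-machine.cmake -bt Release
cd build-your-machine-release
make -j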
Hello @jhuang2601, thank you for answering me. I downloaded and compiled GEOSX successfully. I also tested it with "ctest -V" without errors.
I have Debian 11 with the following versions of cpp, g++, cmake, and make:
cpp (Debian 10.2.1-6) 10.2.1 20210110
g++ (Debian 10.2.1-6) 10.2.1 20210110
cmake version 3.18.4
GNU Make 4.2.1
I also have the Advanced XML Features installed correctly.
When I try to run the heterogeneousInSitu_benchmark.xml file, I face some problems:
- I have no idea what "srun" is, so I cannot run the example command:
srun -n 36 -ppdebug geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results
bash: srun: command not found
- I tried to run it without srun, but I got an error:
geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results
heterogeneousInSitu_benchmark.xml.preprocessed
Max threads: 8
GEOSX version 0.2.0 (develop, sha1: 35e83dc8e)
Adding Mesh: InternalMesh, mesh1
Adding Geometric Object: Box, source_a
Adding Geometric Object: Box, perf_a
Adding Geometric Object: ThickPlane, fracturable_a
Adding Solver of type Hydrofracture, named hydrofracture
Adding Solver of type SolidMechanicsLagrangianSSLE, named lagsolve
Adding Solver of type SinglePhaseFVM, named SinglePhaseFlow
Adding Solver of type SurfaceGenerator, named SurfaceGen
Adding Output: VTK, vtkOutput
Adding Output: Silo, siloOutput
Adding Output: Restart, restartOutput
Adding Event: SoloEvent, preFracture
Adding Event: PeriodicEvent, outputs_vtk
Adding Event: PeriodicEvent, outputs_silo
Adding Event: PeriodicEvent, solverApplications
Adding Event: PeriodicEvent, pumpStart
TableFunction: flow_rate
TableFunction: sigma_xx
TableFunction: sigma_yy
TableFunction: sigma_zz
TableFunction: init_pressure
TableFunction: bulk_modulus
TableFunction: shear_modulus
TableFunction: apertureTable
Adding Object CellElementRegion named Domain from ObjectManager::Catalog.
Adding Object SurfaceElementRegion named Fracture from ObjectManager::Catalog.
***** ERROR
***** LOCATION: /home/jp/unam_phd/codes/GEOSX/src/coreComponents/mesh/mpiCommunications/SpatialPartition.cpp:173
***** Controlling expression (should be false): check != m_size
***** Rank 0:
Expected check == m_size
check = 36
m_size = 1
** StackTrace of 7 frames **
Frame 0: geosx::InternalMeshGenerator::generateMesh(geosx::DomainPartition&)
Frame 1: geosx::MeshManager::generateMeshes(geosx::DomainPartition&)
Frame 2: geosx::ProblemManager::generateMesh()
Frame 3: geosx::ProblemManager::problemSetup()
Frame 4: geosx::GeosxState::initializeDataRepository()
Frame 5: main
Frame 6: __libc_start_main
Frame 7: _start
=====
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
@DragonBalerion The posted issue is related to your MPI run: the number of MPI ranks must match the product of the partition counts, 6 × 2 × 3 = 36 (-x 6 -y 2 -z 3).
Can you try mpirun -n 36 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results
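(Background on the original command: srun is the SLURM scheduler's job launcher, and -ppdebug, i.e. -p pdebug, requests a specific SLURM partition; neither exists on a standalone workstation, which is why bash reported "command not found". With Open MPI, mpirun plays the same role:)

# On a SLURM cluster, as written in the docs:
srun -n 36 -ppdebug geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results
# On a standalone workstation with Open MPI:
mpirun -n 36 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results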
Yes, of course.
When I try mpirun -n 36 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 6 -y 2 -z 3 -o hf_results, I get:
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 36
slots that were requested by the application:
geosx_preprocessed
Either request fewer slots for your application, or make more slots
available for use.
A "slot" is the Open MPI term for an allocatable unit where we can
launch a process. The number of slots available are defined by the
environment in which Open MPI processes are run:
1. Hostfile, via "slots=N" clauses (N defaults to number of
processor cores if not provided)
2. The --host command line parameter, via a ":N" suffix on the
hostname (N defaults to 1 if not provided)
3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
4. If none of a hostfile, the --host command line parameter, or an
RM is present, Open MPI defaults to the number of processor cores
In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.
Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
--------------------------------------------------------------------------
@DragonBalerion You need to check the number of available cores (probably fewer than 36) on your machine by using nproc, and then adjust your MPI settings accordingly.
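For example, a hypothetical session on an 8-core machine, keeping the rank count equal to the 2 × 2 × 2 partition product:

# Check how many cores are available
nproc
# Suppose it prints 8; then use at most 8 ranks and a matching partition:
mpirun -n 8 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 2 -y 2 -z 2 -o hf_results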
Yes, you are right. I have far fewer than 36 cores; I only have 8. I tried mpirun -n 8 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 2 -y 2 -z 2 -o hf_results, but I got the same error.
Then I tried mpirun --use-hwthread-cpus geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 2 -y 2 -z 2 -o hf_results. It is still running, but I like to believe it is going to be OK. (Well, it is going to take forever with only eight cores.)
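(A likely explanation: Open MPI counts physical cores as slots by default, so on a 4-core machine with 8 hardware threads, -n 8 still exceeds the default slot count; that is why --use-hwthread-cpus helps. The --oversubscribe flag mentioned in the error message above is another option, e.g.:)

mpirun --oversubscribe -n 8 geosx_preprocessed -i heterogeneousInSitu_benchmark.xml -x 2 -y 2 -z 2 -o hf_results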
Now I'll try to modify it to use supercritical CO2 as the fracturing fluid instead of water.
Do you have any advice for that? Do you know if it is possible?
Thank you very much for your time.
@DragonBalerion Can you confirm whether you can finish running this tutorial example on your machine? To model CO2 fracturing, you need to update the PVT functions for supercritical CO2 (see here) and turn on the matrix permeability model to handle fluid leak-off.
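To make the PVT step concrete: in the CO2 injection tutorial referenced later in this thread, the CO2-brine fluid model is declared in the Constitutive block and points at small PVT parameter files. The sketch below follows recent GEOS documentation and is an untested assumption for this hydrofracture deck; the model name and table ranges may differ in older GEOSX versions, and since the hydrofracture example couples single-phase flow, swapping in a two-phase CO2-brine model is not a drop-in change.

<!-- Hypothetical sketch, adapted from the CO2 injection tutorial -->
<CO2BrinePhillipsFluid
  name="fluid"
  phaseNames="{ gas, water }"
  componentNames="{ co2, water }"
  componentMolarWeight="{ 44e-3, 18e-3 }"
  phasePVTParaFiles="{ pvtgas.txt, pvtliquid.txt }"
  flashModelParaFile="co2flash.txt"/>

Here pvtgas.txt tabulates supercritical CO2 density and viscosity over a pressure/temperature window, along these lines:

DensityFun SpanWagnerCO2Density 1e6 1.5e7 5e4 367.15 369.15 1
ViscosityFun FenghourCO2Viscosity 1e6 1.5e7 5e4 367.15 369.15 1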
Thank you very much for your time; I am going to check it.
Sorry for taking so long to reply; I can't run that example because my computer doesn't have a GPU. I borrowed a laptop with a GPU from a friend, but unfortunately I couldn't install GEOSX on it. I will change my cmake file to see if I can find my mistake.
Do you know if I can run it with another linear solver package that doesn't require a GPU?
@DragonBalerion For this tutorial example, it runs on multiple CPUs; there is no need for a GPU.
Oh, I made a mistake. I read that I need Hypre, and I saw ENABLE_HYPRE_CUDA somewhere when I searched for it, so I thought that I needed a GPU.
When I ran the example mpirun -np 4 geosx -i SimpleCo2InjTutorial.xml -x 1 -y 1 -z 4, I got an error that said the file didn't exist, but for some reason I thought it was because I hadn't enabled Hypre CUDA.
Now I realize that if I run mpirun -np 4 geosx -i simpleCo2InjTutorial_smoke.xml -x 1 -y 1 -z 4 (the name of the file is a little different), it runs.
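(For anyone hitting the same confusion: Hypre itself runs on CPUs, and ENABLE_HYPRE_CUDA merely switches on an optional GPU backend. In a GEOSX host-config this would look roughly like the lines below; the option names are assumed from that era's build system and worth double-checking:)

# In your host-config (.cmake): CPU-only Hypre, no CUDA backend
set(ENABLE_HYPRE ON CACHE BOOL "")
set(ENABLE_HYPRE_CUDA OFF CACHE BOOL "")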