Hi Javier,
The way I understand the depletion module of the new OpenMC works (this is my opinion, and I hope I am right, but it should always be backed up by the OpenMC developers ;-)) is the following: the transport calculation is carried out by the OpenMC executable that was created with a C++ compiler, called via the shared openmc library. After the neutron spectra have been computed, the depletion solver takes over via the Python API. I think this is the reason why, if you only run the transport calculation, defining the materials.cross_sections attribute in the Python API is enough, because it gets exported to the materials.xml file. However, if you additionally execute the depletion module, that module will no longer look into the materials.xml file but instead into the general environment variable OPENMC_CROSS_SECTIONS when updating the material composition at the different burnup steps. (I do not know if such a cross_sections.xml file could perhaps be passed as an argument to the depletion operator module, but I think the best way, whether for regular MC transport only or with depletion, is to define the environment variable OPENMC_CROSS_SECTIONS from the beginning. That is what I personally prefer to do.)
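To make that a bit more concrete, here is a minimal sketch of the two ways of pointing the code at the data (the path below is only a placeholder for wherever your cross_sections.xml actually lives):

```python
import os
import openmc

# Preferred (works for transport and depletion): set the global
# environment variable before building or running anything.
os.environ['OPENMC_CROSS_SECTIONS'] = '/path/to/cross_sections.xml'

# Alternative (transport only): this attribute is exported to
# materials.xml, but the depletion solver does not read it from there.
materials = openmc.Materials()
materials.cross_sections = '/path/to/cross_sections.xml'
```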
Now, answering your questions, I can personally say the following:
1- If I want to try depletion with another nuclear data, I should set again the environment variable, right?
Yes. If you would like to load data from different major nuclear data libraries in different executions, you should redefine, at every run, the path where the code can globally find the required data.
2- I read in the Pincell Depletion example that it is possible to create your own depletion chain but I did not find an example, do you have any idea?
There is the module openmc.deplete.chain in the Python API that is used to create the depletion chain in XML format. For an example, I guess you can always take a look at the script openmc-make-depletion-chain: if you have the ENDF-formatted neutron reaction, decay, and fission yield data, the module can easily create such a chain via the "from_endf" method (look at this webpage for more info: https://docs.openmc.org/en/latest/pythonapi/generated/openmc.deplete.Chain.html#openmc.deplete.Chain). If you need more examples and clarification, I guess we could wait for more help from the OpenMC developers.
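Just to illustrate what I mean, here is a small sketch of that workflow. The directory layout is purely an assumption of mine, so adapt the paths to wherever your ENDF sublibraries are stored:

```python
import glob
import openmc.deplete

# Assumed locations of the ENDF-formatted sublibraries (placeholders)
decay_files = glob.glob('/path/to/endf/decay/*.endf')
fpy_files = glob.glob('/path/to/endf/nfy/*.endf')        # fission product yields
neutron_files = glob.glob('/path/to/endf/neutrons/*.endf')

# Build the depletion chain from the ENDF files and write it out as XML
chain = openmc.deplete.Chain.from_endf(decay_files, fpy_files, neutron_files)
chain.export_to_xml('chain.xml')
```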
3- You mentioned that you are using a cluster for your runs, I am also using a shared cluster and submit the job using a .sh file. Do you have any idea how to do this now with depletion? Without depletion, I only include “openmc” in that file and, when the resources that I request are available, the simulation starts. Now, with depletion, I am a little bit confused on how to submit the job.
If you only want to run a transport calculation, you only need the materials, geometry, settings (and, if you like, the extra tallies) XML files, which can be created with the Python API. Thus, to execute the transport calculation alone (as you well mentioned in your question), you only need to call the openmc executable, which will search for those XML files. However, if you instead want to run a depletion calculation, you need a Python input file in which you call the depletion modules, so on a shared cluster you have to submit it as a Python script. This is also the reason why, if you want a depletion calculation to run in parallel, you need the Python mpi4py module. Below you can find two examples of how I run the depletion calculation on the cluster, using either the Torque or Slurm launching programs. This is for a single node that has 72 cores, and I assumed 4 MPI tasks with 18 cores per task (this is only my personal way of executing in parallel and only an example: doing it this way on a single node makes it faster at the cost of available memory. Otherwise, you can go across nodes if you like, with as many MPI tasks as nodes, or only have shared-memory computations per node; to exemplify this last case, on my particular cluster you would then have 1 MPI task with 72 cores per task if you follow the rule of one MPI task per node. All in all, this is up to the user).
********* For SLURM *****************
#!/bin/bash
#SBATCH -t 1:0:0
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=18
#SBATCH -N 1
module load /home/ahsolis/modopenmc
time srun python run_depletion.py
module unload /home/ahsolis/modopenmc
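And just so the run_depletion.py called above is not a mystery, here is a minimal sketch of what such a driver could contain, loosely following the pincell depletion example. The material, geometry, chain file name, power, and time steps are all placeholders of mine, and the exact class names of the depletion operator and integrator may differ between OpenMC versions, so please adapt it to your own case:

```python
import openmc
import openmc.deplete

# Fuel material (depletable materials need a volume in cm^3)
fuel = openmc.Material(name='uo2')
fuel.add_element('U', 1.0, enrichment=3.0)
fuel.add_element('O', 2.0)
fuel.set_density('g/cm3', 10.4)
fuel.depletable = True
fuel.volume = 0.785  # placeholder volume

# Toy geometry: a single infinite fuel cylinder with reflective boundary
radius = openmc.ZCylinder(r=0.5, boundary_type='reflective')
cell = openmc.Cell(fill=fuel, region=-radius)
geometry = openmc.Geometry([cell])

settings = openmc.Settings()
settings.particles = 1000
settings.batches = 50
settings.inactive = 10

# The operator couples transport and depletion; 'chain.xml' is the
# depletion chain file discussed above (placeholder name)
operator = openmc.deplete.Operator(geometry, settings, 'chain.xml')

# Placeholder irradiation history: total power in W, time steps in seconds
power = 200.0
time_steps = [30 * 24 * 3600] * 6  # six 30-day steps

integrator = openmc.deplete.PredictorIntegrator(operator, time_steps, power)
integrator.integrate()
```

When this script is launched with srun (or mpirun) and mpi4py is installed, as in the job script above, the transport solves are distributed over the MPI tasks.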