Segmentation fault when running the CASL chain

Hello!

I am testing the depletion capabilities of the newest version of OpenMC. I successfully ran the pincell_depletion example that uses the chain_simple.xml file. After expanding the fuel composition a bit, however, when I tried either chain_casl.xml or the long ENDF/B-VII.1 chain, the run crashed in the middle of the second or third transport calculation on a cluster. I attached both the input and the Slurm output, for a run with mpi=1 and omp=72 on a node that I believe has plenty of memory (around 196 GB of RAM).

Any idea what the problem could be?

Thanks and a happy 2020 !

Augusto

run_depletion.py (6.85 KB)

slurm-17914.out (36.9 KB)

Update: Problem solved

The problem was not the memory (as I originally thought). The problem was in the HDF5 neutron reaction data I was using: a library I had created in HDF5 from ACE files that we have at work. Instead, I downloaded the JEFF 3.2 HDF5 library from the OpenMC website and now it works. So the segmentation fault was (I think) most probably in the angular data, as the exit signal at the beginning of the error message suggested.

2nd update on the same issue:

Hi again,

I made additional tests with some different nuclear data libraries and depletion chains. It turns out that I am not always able to run the pin depletion calculation to completion.

For example, with the following material composition:

Hi Augusto,

I see that you have experience with the depletion capabilities of OpenMC 0.11.0, so maybe you or someone else can help me. I am new to simulating depletion and I am having some issues.

I strictly followed the steps of the Pincell Depletion example (https://docs.openmc.org/en/v0.11.0/examples/pincell-depletion.html), but I am getting an error that I don’t understand. Attached are my input file and a figure where the error can be seen (bottom right). To create the XML files I am using Spyder 4.0.0, but I tried Jupyter Notebook and got the same error. If I remove the part where I define the depletion analysis, the simulation runs perfectly. For the depletion, I am using chain_casl.xml.

Any suggestion on how to fix this?

Thanks in advance,
Javier

PincellDepletion.py (1.63 KB)

Dear Javier,

I did manage to run your depletion input. It looks like the problem is that the code cannot find the cross_sections.xml file. Your input does appear to set the path to the cross sections file correctly, yet the code still cannot find it (strange, since, as I mentioned, the definition looks right). So the only small change I made to your input was to comment out the following line:

## materials.cross_sections = '/home/javier/Documents/OpenMC/DataENDF-7.1/cross_sections.xml'

Instead, I set the environment variable for the cross sections file like this: export OPENMC_CROSS_SECTIONS=/home/ahsolis/cross_sections.xml. So in your case you can do the following:

export OPENMC_CROSS_SECTIONS=/home/javier/Documents/OpenMC/DataENDF-7.1/cross_sections.xml

Like that, I am pretty sure it will run for you as it did for me.

As a last note, it is good that you are using the ENDF/B-VII.1 HDF5 library since, as I have tested and commented before, it seems to be the library that works for depletion calculations.

Augusto

Hi Augusto,

Thanks for your reply.

In fact, the way I defined the path to the cross sections is okay, because the simulation runs before setting up the depletion. Still, I did what you suggested and now it runs. So it seems that setting the environment variable pointing to cross_sections.xml is necessary if you want to simulate depletion. I have three more questions:

1- If I want to try depletion with other nuclear data, I should set the environment variable again, right?

2- I read in the Pincell Depletion example that it is possible to create your own depletion chain, but I did not find an example. Do you have any idea?

3- You mentioned that you use a cluster for your runs. I am also using a shared cluster and submit jobs with a .sh file. Do you have any idea how to do this with depletion? Without depletion, I only include “openmc” in that file and, when the resources I request become available, the simulation starts. With depletion, I am a little confused about how to submit the job.

Thanks,

Javier

Hi Javier,

The way I understand the depletion module of the new OpenMC works (this is my opinion, which I hope is right, but it should always be confirmed by the OpenMC developers ;-)) is the following: the transport calculation is carried out by the OpenMC executable, built with a C++ compiler and called through the shared openmc library. Once the neutron spectrum has been computed, the depletion solver takes over via the Python API. I think this is why, if you only run the transport calculation, defining the materials.cross_sections attribute in the Python API is enough: it gets exported to the materials.xml file. However, if you additionally run the depletion module, that module no longer looks into materials.xml but instead reads the general environment variable OPENMC_CROSS_SECTIONS when updating the material composition at the different burnup steps. (I do not know whether the cross_sections.xml file could instead be passed as an argument to the depletion Operator, but I think the best approach, for either plain MC transport or depletion, is to define the OPENMC_CROSS_SECTIONS environment variable from the beginning. That is what I personally prefer to do.)
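For instance, here is how you could set the variable from within the Python driver itself before anything else runs (just a sketch on my part; the path is a placeholder for your actual cross_sections.xml):

import os

# Placeholder path: point this at your actual cross_sections.xml
os.environ['OPENMC_CROSS_SECTIONS'] = '/path/to/cross_sections.xml'

import openmc.deplete  # transport and depletion will now both find the data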

Now, answering your questions, I can personally say the following:

1- If I want to try depletion with another nuclear data, I should set again the environment variable, right?

Yes. If you would like to load data from different major nuclear libraries in different runs, you should define at every run the path where the code can globally find the required data.

2- I read in the Pincell Depletion example that it is possible to create your own depletion chain but I did not find an example, do you have any idea?

There is the openmc.deplete.Chain class in the Python API that is used to create the depletion chain in XML format. For an example, you can take a look at the openmc-make-depletion-chain script: if you have the ENDF-formatted neutron reaction, decay, and fission yield data, the class can easily create such a chain via its from_endf method (see this page for more info: https://docs.openmc.org/en/latest/pythonapi/generated/openmc.deplete.Chain.html#openmc.deplete.Chain). If you need more examples and clarification, we could wait for more help from the OpenMC developers.
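As a rough sketch (the glob patterns below are placeholders for wherever your ENDF-6 files live), building and exporting a custom chain could look like this:

import glob
import openmc.deplete

# Placeholder paths to ENDF-6 formatted decay, fission yield, and neutron reaction files
decay_files = glob.glob('/path/to/endf/decay/*.endf')
fpy_files = glob.glob('/path/to/endf/nfy/*.endf')
neutron_files = glob.glob('/path/to/endf/neutrons/*.endf')

# Assemble the chain and write it out as XML for later depletion runs
chain = openmc.deplete.Chain.from_endf(decay_files, fpy_files, neutron_files)
chain.export_to_xml('chain_custom.xml')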

3- You mentioned that you are using a cluster for your runs, I am also using a shared cluster and submit the job using a .sh file. Do you have any idea how to do this now with depletion? Without depletion, I only include “openmc” in that file and, when the resources that I request are available, the simulation starts. Now, with depletion, I am a little bit confused on how to submit the job.

If you only want to run a transport calculation, you only need the materials, geometry, settings (and, optionally, tallies) XML files, which can be created with the Python API. To execute just the transport calculation (as you mentioned in your question), you only need to call the openmc executable, which searches for those XML files. If you instead want to run a depletion calculation, you need a Python input file in which you call the depletion modules, so on a shared cluster you have to submit it as a Python script. This is also why, if a depletion calculation is to be executed in parallel, you need the mpi4py Python module. Below is an example of how I run the depletion calculation on the cluster with the Slurm scheduler. It is for a single node with 72 cores, using 4 MPI tasks and 18 cores per task (this is only my personal way of executing in parallel and is just an example: doing it this way on a single node makes the run faster at the cost of available memory. You can instead go across nodes, with as many tasks as nodes, or keep the computation shared-memory only within each node; to exemplify this last case, on my particular cluster you would then have 1 MPI task and 72 cores per task following the one-MPI-rank-per-node rule. All in all, this is up to the user).

********* For SLURM *****************

#!/bin/bash
#SBATCH -t 1:0:0
#SBATCH -N 1
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=18

module load /home/ahsolis/modopenmc

time srun python run_depletion.py

module unload /home/ahsolis/modopenmc
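
For completeness, the run_depletion.py launched above is just an ordinary Python script that calls the depletion modules. A minimal sketch of such a driver, following the structure of the pincell depletion example (the model, power, and time steps below are placeholders, not my actual input), would be:

import openmc
import openmc.deplete

# Placeholder model: a single depletable fuel region with a reflective boundary
fuel = openmc.Material(name='fuel')
fuel.set_density('g/cc', 10.4)
fuel.add_nuclide('U235', 0.03)
fuel.add_nuclide('U238', 0.97)
fuel.add_nuclide('O16', 2.0)
fuel.depletable = True
fuel.volume = 1.0  # cm^3; the operator needs volumes of depletable materials

sphere = openmc.Sphere(r=10.0, boundary_type='reflective')
cell = openmc.Cell(fill=fuel, region=-sphere)
geometry = openmc.Geometry(openmc.Universe(cells=[cell]))

settings = openmc.Settings()
settings.batches = 50
settings.inactive = 10
settings.particles = 1000

# The operator couples the transport solves to the depletion solver
operator = openmc.deplete.Operator(geometry, settings, 'chain_casl.xml')
power = 1.0e3  # total power in W (placeholder)
time_steps = [30 * 24 * 60 * 60] * 3  # three 30-day steps, in seconds

integrator = openmc.deplete.PredictorIntegrator(operator, time_steps, power)
integrator.integrate()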

Augusto,

Thank you for reporting the outcome of your debugging adventure! One possible candidate for the strange behavior with the JEFF data is that the depletion chains involve some isotopes that may not exist at certain temperatures in the JEFF data. I haven’t looked into this, but you could use the Python API to investigate. Nuclides can be searched for in the depletion Chain using:
>>> "U235" in chain
True

With this feature, you could iterate over all the nuclides present in your JEFF cross section data and check whether they exist in the depletion chain. I am confused as to why it was necessary for you to create isotopes at zero concentration; the depletion sequence should be able to handle that. Can you expand on what happened with the JEFF 293 K cross sections and the full ENDF chain, with and without adding these trace isotopes?
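For instance, a rough sketch of that comparison (the paths are placeholders) could be:

import openmc.data
import openmc.deplete

# Load the cross section library listing and the depletion chain (placeholder paths)
lib = openmc.data.DataLibrary.from_xml('/path/to/jeff32/cross_sections.xml')
chain = openmc.deplete.Chain.from_xml('chain_casl.xml')

# List nuclides that have transport data but no entry in the chain
missing = [mat for entry in lib.libraries if entry['type'] == 'neutron'
           for mat in entry['materials'] if mat not in chain]
print('In the library but not in the chain:', missing)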

Also, thank you for the detailed response to Javier’s remarks; I was just getting to those. You are correct in how the Python API interfaces with the openmc library. Excellent!

Javier,

Augusto’s response for running depletion is great, please follow that. It is recommended that you have the OPENMC_CROSS_SECTIONS environment variable set, specifically for running depletion, but it can be helpful for transport as well. This eliminates the need to pass the cross section data into your Material definition. To permanently set this environment variable, place the following in a shell script (your personal .bashrc file or your cluster submission script):
export OPENMC_CROSS_SECTIONS=/path/to/cross_sections.xml

To create your own depletion chain, use the Chain.from_endf method. It requires three types of files: neutron reaction data, neutron-induced fission yield data, and isotopic decay data.

I apologize for the delay in getting to this issue, but I hope this has been helpful.

Andrew

Thanks Augusto and Andrew for your responses!

Hello everyone,

Now I am having a new problem. I am trying to do my depletion analysis and I’m getting this error:

ERROR: Failed to open HDF5 file with mode 'r':
/home/gonzaj10/libraries/DataENDF-7.1/Am241.h5

I’m doing this analysis on a shared cluster. On my PC, with the same nuclear library, this error doesn’t appear and the simulation completes successfully.
Any idea?

Thanks,
Javier

Dear Andrew,

Thanks for your reply. You asked me to expand on the problem of having to initialize concentrations to zero in the material when depleting with a certain chain XML file and a certain data library. For instance, if I use the CASL or the ENDF/B-VII.1 depletion chain file together with the JEFF 3.2 data set and do not initialize some concentrations to zero in the depletable material, I get the following error:

Hi Augusto,

Thanks for reporting the problem you’re running into, and also thanks for your detailed (and correct) response to Javier. It does look like there’s something wrong with the JEFF 3.2 data (the 800 K cross section for MT=5 in Mn55 really is missing from the file). To me, it’s surprising that it works at all when you set the number density to zero; I would have expected it to fail in either case. How are you specifying temperatures for your problem, i.e., what are you setting for settings.temperature? I’ll see if I can come up with an explanation for all this.

Best regards,
Paul

Hi Paul,

Thanks for your reply. The only place I specify a temperature is the fuel material (an old habit I kept from version 0.10.0). Below is how I needed to define the fuel material to make it run with JEFF 3.2 (even though it would still end up crashing in a later transport calculation). As you can see, apart from Mn55, I also needed to initialize Cu63, Cu65, and Cf252 to zero:

uo2 = openmc.Material(name='Fuel Batch 1')
uo2.set_density('g/cc', 10.499)
uo2.temperature = 600.0
uo2.add_nuclide('O16', 1.06723E-01, 'wo')
uo2.add_nuclide('U235', 2.80E-01, 'wo')
uo2.add_nuclide('U238', 6.13744E-01, 'wo')
uo2.add_nuclide('Mn55', 0.0, 'wo')
uo2.add_nuclide('Cu63', 0.0, 'wo')
uo2.add_nuclide('Cu65', 0.0, 'wo')
uo2.add_nuclide('Cf252', 0.0, 'wo')
uo2.depletable = True

Other materials, such as the cladding, coolant, and helium gap, were not set to any specific temperature, nor did I specify a temperature in any other way (i.e., in the settings file, cells, etc.). I attached the input file I used, in case you want to take a look.

Best regards,

Augusto

run_depletion.py (7.1 KB)

Hi Augusto,

I have an update for you. When I ran your depletion script with the JEFF 3.2 data, I also ran into a segfault. I was able to track the issue down to some nuclides not having an angular distribution specified for elastic scattering (the one that caused problems for me was Te120). I have put together a fix in the code so that in this case an isotropic distribution is used. In addition, I’ve fixed our data processing so that, when converting an ACE file to our HDF5 format, this situation is detected and an isotropic angular distribution is explicitly added, which allows the current version of the code to work with updated data files. I’ve uploaded a new version of the JEFF 3.2 HDF5 files at openmc.org, so please re-download those and try your simulation again.
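If you would rather reconvert your own ACE files than download ours, a minimal sketch using the updated data processing (the ACE path is a placeholder) is:

import openmc.data

# Convert an ACE file to OpenMC's HDF5 format; the fixed processing now adds
# an explicit isotropic angular distribution where the ACE file omits it
te120 = openmc.data.IncidentNeutron.from_ace('/path/to/ace/Te120.ace')
te120.export_to_hdf5('Te120.h5')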

There is still an issue with Mn55 missing some reaction cross sections at 800 K, but this is a problem with the JEFF 3.2 ACE files themselves, so there is not much we can do about it; the only way around it would be to reprocess the data through NJOY again. One thing you may want to try in your simulation is temperature interpolation, specifying a temperature range over which data is loaded:

settings = openmc.Settings()
settings.temperature = {
    'method': 'interpolation',
    'range': (300.0, 1000.0)
}

Best regards,
Paul