Exit code -9 when running simulation

Hi folks,

I’m trying to use OpenMC from xsgen, which runs a transport calculation for each of many separate time steps. It worked great for the first time step (yay!).

For the second time step, the materials had been transmuted somewhat, and when I ran OpenMC it exited with code -9 right after printing “Creating unionized energy grid...”. What does this mean?

I’ve put some log files and the XML files on this gist: https://gist.github.com/jdangerx/d68f5358e34f9b7d619e

I’d be grateful for any insight into this.

Thanks,
John

I’m almost positive you ran out of memory: an exit code of -9 means the process was killed with SIGKILL, which is usually the kernel’s out-of-memory killer stepping in, and the unionized energy grid takes a lot of memory. Try adding <energy_grid> nuclide </energy_grid> to your settings.xml and you should be all set.
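
For reference, it just goes at the top level of settings.xml next to your existing elements; a minimal sketch would look like this (everything besides the energy_grid element is a stand-in for whatever you already have):

<settings>
  <!-- ... your existing eigenvalue, source, etc. elements stay as they are ... -->

  <!-- use per-nuclide energy grids instead of one big unionized grid -->
  <energy_grid>nuclide</energy_grid>
</settings>

The per-nuclide grid trades some speed in cross-section lookups for a much smaller memory footprint, which is why it should help here.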

Best regards,
Paul

Thanks! That worked! However, I’ve now managed to make it segfault. I recompiled with debug flags and got this:

At line 2266 of file /home/john/openmc/src/tally.F90
Fortran runtime error: Index '-1' of dimension 1 of array 'tally_maps%items' below lower bound of 1

The full output and tallies.xml are here: https://gist.github.com/jdangerx/2c9ade73f1f2e007367d

John

Hi John - I’m not able to reproduce the segfault you’re getting. Did you make any changes to the XML files in that gist other than adding <energy_grid>? What version/git SHA1 of OpenMC are you using?

Best,
Paul

The only change was to add <energy_grid> to settings.xml. The git SHA1 is:

c2d9f6d1a5dcced954d460952fabfa8795dcf71b

Hope that's helpful!

Thanks,
John

I just tried the version from the develop branch (git SHA1: 0f1270e0059d4b1135ca006d7bde6e07694f1036), and it threw the same error - though at a different line:

At line 2247 of file /home/john/openmc/src/tally.F90
Fortran runtime error: Index '-1' of dimension 1 of array 'tally_maps%items' below lower bound of 1

It also runs into this error on both dev and master with the following config files:
https://gist.github.com/jdangerx/b5a0f07be79ec8a6e92c

John

Hello Paul,

I am also seeing a very similar error on my machine (below). My only thought is that this could be an issue triggered by the NNDC data set; John and I are both using that data. Which data set are you using? Could you try re-running with NNDC? Thanks!

Be Well
Anthony

$ openmc

[OpenMC ASCII-art banner]

Copyright: 2011-2014 Massachusetts Institute of Technology
License: http://mit-crpg.github.io/openmc/license.html
Version: 0.6.1
Git SHA1: 0f1270e0059d4b1135ca006d7bde6e07694f1036
Date/Time: 2014-12-02 12:58:01

I’ve done a little more digging on this, so here’s an info dump. Two questions up front:

  1. What debugger do you recommend/use?

  2. Any idea why we are getting this error?

I’m still on the dev branch, and the segfault is still occurring here:

At line 2247 of file /home/john/openmc/src/tally.F90
Fortran runtime error: Index '-1' of dimension 1 of array 'tally_maps%items' below lower bound of 1

I have done the following:

  • acquired MCNP - we were wondering if the culprit was the NNDC data, so I got MCNP and used its cross sections instead. I still get the same segfault, at the same place in the code, so I guess it wasn’t a difference in the data.

  • run OpenMC with gdb:

I set a breakpoint at line 2247 of tally.F90 on the develop branch, where the error occurs. The offending line is:
! Check how many elements there are for this item
n = size(tally_maps(filter_type) % items(filter_value) % elements)
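
For completeness, the gdb session looks roughly like this - run from the directory with the XML files, and the binary name is just how it’s installed on my machine:

$ gdb openmc
(gdb) break tally.F90:2247
(gdb) run
... (normal OpenMC output until the breakpoint is hit) ...
(gdb) info locals
(gdb) print filter_value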

When I tell gdb to p tally_maps or anything related (tally_maps(filter_type), etc.), it gives me “Unhandled dwarf expression opcode 0x97”.

Here are the local variables at this point.

(gdb) info locals

bin = 0
i_tally_check = 1076252894
n = 2

Additionally, I checked filter_value, since that’s what we’re indexing tally_maps%items by.

(gdb) p filter_value
$3 = 1

That’s not -1! Here is filter_type, the other thing we’re indexing by, for good measure.

(gdb) p filter_type
$4 = 2

I noticed that a few lines earlier (line 2241) we check whether tally_maps(filter_type) % items(filter_value) % elements is allocated at all:

! If there are no scoring bins for this item, then return immediately
if (.not. allocated(tally_maps(filter_type) % items(filter_value) % elements)) then
bin = NO_BIN_FOUND
return
end if

(NO_BIN_FOUND is -1.) So I checked whether this guard was actually working; I expected this command to print .TRUE.:

(gdb) p allocated(tally_maps(filter_type) % items(filter_value) % elements)
No symbol "allocated" in current context.

That’s strange - I think this might be an issue with gdb not knowing how to evaluate the Fortran intrinsic allocated(). I searched around on Google and Stack Overflow and found some vague recommendations for Archer GDB, which supposedly has better Fortran support, but nobody reporting this specific issue.
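
If I can’t get gdb to cooperate, my fallback is to instrument the Fortran directly and rebuild - something like the following dropped in right before the failing size() call in tally.F90 (a throwaway debug print, not a proposed change):

! temporary debug output before the failing size() call
write(*, *) 'filter_type = ', filter_type, ' filter_value = ', filter_value
write(*, *) 'elements allocated? ', &
    allocated(tally_maps(filter_type) % items(filter_value) % elements)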

Thanks!

John

Hi All,

It is worth noting that I have been able to reproduce this on my machine too.

Be Well
Anthony

I’m still unable to reproduce this for some reason. I am also using NNDC data, and I’ve tried with both develop and master. What compiler are you guys using, and how are you building? Also, are you both on Macs?

Thanks,
Paul

Hi Paul,

I think we are both on Linux: I am on Ubuntu and John is on Arch. I am using gfortran v4.8.2 and building with make. Should I be using CMake?

Be Well
Anthony

Hi Paul,

I am on Arch, with gfortran v4.9.2. To build, I’m running:

cd src/build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local --DEBUG ..
make install

If you take a peek at the Makefile, you’ll see that it just calls CMake under the hood anyway, so that shouldn’t be an issue. I just tried running after compiling with both gfortran 4.8.2 and 4.9.2, and I still can’t reproduce the error. I’m running out of ideas here…

Paul

Weird. I am doing a fully clean build and install, and re-downloading the NNDC data. John, can you do the same?

Hi Paul,

Are you converting your NNDC data to the binary format, or are you leaving it in ASCII? I am leaving mine in ASCII.

Be Well
Anthony

Mine are converted to binary, so I suppose that could be a source of differences. I’ll try with ASCII when I have a chance.

Paul

So I just tried this with binary NNDC data and it has the same error.

I’ll try redownloading the NNDC data to see if I get the error that way.

Paul

For what it’s worth, the bug is still around on my system after a full rebuild and a re-download of the NNDC data, converted to binary.

I was grousing about this to a friend, and he wondered whether some other library that gets linked in might differ between our systems. Do you think that might be worth looking into?