There are a number of things that could be happening here. From the output alone, it’s hard to say.
One thing that might help is to run the `openmc` executable from the command line instead of through the Python interpreter; useful debugging information can sometimes be filtered out of the Python output. Increasing the verbosity setting of the run might provide more information as well.
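If it helps, the verbosity can be raised directly in `settings.xml` (OpenMC accepts an integer `<verbosity>` element, 1–10). Here is a minimal standard-library sketch that patches an existing `settings.xml` string under that assumption, so no OpenMC installation is needed to apply it:

```python
# Sketch: set/raise the <verbosity> element in an OpenMC settings.xml,
# assuming the schema where <verbosity> is an optional integer child
# of the top-level <settings> element.
import xml.etree.ElementTree as ET

def set_verbosity(xml_text: str, level: int = 10) -> str:
    root = ET.fromstring(xml_text)
    elem = root.find("verbosity")
    if elem is None:
        elem = ET.SubElement(root, "verbosity")  # add it if absent
    elem.text = str(level)
    return ET.tostring(root, encoding="unicode")

settings = "<settings><particles>100000</particles></settings>"
print(set_verbosity(settings, 10))
```

The same thing can of course be done from the Python API (`settings.verbosity = 10` before exporting), but editing the XML lets you re-run from the command line without regenerating the model.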
I’m honestly not sure. I was expecting more of an indication as to what was going wrong. Is there any chance you can share this model so I can dig into it further?
@Fahima I’m not 100% sure why the segfault is happening, but I think it may be the following sequence of events:
1. Particles are being lost as they are transported.
2. The lost-particle limit is hit on one OpenMP thread, which calls `fatal_error`, which in turn calls `std::exit`.
3. The call to `std::exit` starts tearing down the program's memory.
4. Meanwhile, the other OpenMP threads are still executing and eventually touch a piece of memory that no longer exists, causing the segfault.
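To illustrate the failure mode in Python terms (names here are illustrative, not OpenMC's internals): the segfault scenario is one worker hard-exiting while its siblings still hold references to shared state. The race-free alternative is a cooperative stop flag that lets every worker wind down before shared state is torn down:

```python
# Sketch of a cooperative shutdown that avoids the teardown race
# described above. One worker hits a "fatal" condition and, instead of
# hard-exiting while siblings still run, sets a shared stop event.
import threading

stop = threading.Event()
shared_data = list(range(1000))  # stands in for global program state
results = []

def transport_worker(tid, fatal_at=None):
    for i, x in enumerate(shared_data):
        if stop.is_set():      # cooperative shutdown: no dangling access
            return
        if fatal_at is not None and i == fatal_at:
            stop.set()         # analogous to hitting the lost-particle limit
            return
        results.append((tid, x))

threads = [threading.Thread(target=transport_worker,
                            args=(t, 5 if t == 0 else None))
           for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Only after every worker has exited is it safe to tear down shared state.
shared_data.clear()
print("workers stopped cleanly:", stop.is_set())
```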
If you run with a single thread (`openmc -s 1`), you won't get a segfault, but you will see a bunch of errors about lost particles. I would recommend following the advice in our documentation on diagnosing geometry errors.
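For reference, both the single-thread run and overlap checking can be driven from Python via `openmc.run`, which exposes `threads` and `geometry_debug` arguments (the latter mirrors the `-g`/`--geometry-debug` command-line flag). A guarded sketch, so it degrades gracefully where OpenMC isn't installed:

```python
# Sketch: run OpenMC single-threaded with overlap checking enabled.
# Guarded import so the snippet is harmless where OpenMC is absent.
def run_single_thread_debug():
    try:
        import openmc
    except ImportError:
        return "openmc not installed"
    # threads=1 mirrors `openmc -s 1`; geometry_debug checks for
    # overlapping/undefined cells as particles are transported.
    openmc.run(threads=1, geometry_debug=True)
    return "ok"

print(run_single_thread_debug())
```

Overlap checking slows transport down considerably, so it is best used with a reduced particle count once the lost-particle coordinates have narrowed down the suspect region.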
@paulromano I was trying to debug with only one thread. With 1e4 particles it runs successfully; however, with 1e5 particles it shows a segmentation fault after a while, even though I was running with the command `openmc -s 1`. I am attaching all of my files again; if you can help with anything I would really appreciate it. geometry.xml (618.8 KB) materials.xml (20.4 KB) settings.xml (327 Bytes)