How do I make a surface source?

Hi there,

I am relatively new to using OpenMC, so apologies if my question seems a little trivial. I need to create a custom source: an isotropic neutron source in the shape of a spherical surface. Since I cannot just define a cell and “fill” it with the source, is it possible to make a surface source instead of a point source? How would I go about that?

Thanks in advance!


Hi @SophiaVT ,

Welcome to the forum.

A recent PR made this possible. To use it, you will need to build the current develop branch of OpenMC from GitHub.

Hope this helps!


Hi @Pranto,

May I ask you something about it?
I am trying to use the develop branch of OpenMC for this purpose, and I am struggling with it. If I understood correctly, in one script I collect the particles by setting set_section.surf_source_write = {'surface_ids': [3], 'max_particles': 5000}, and in a second script I read back the file created by running the first one, as
path = '/home/path/to/folder/write_source'
set_section.surf_source_read = {'path': 'surface_source.h5'}

When I run the second script, it returns
ERROR: Source file 'surface_source.h5' does not exist
Have you already tried it?

Much appreciated,
Tony

Hi @tony_emme, try with:

set_section.surf_source_read = {'path': '/home/path/to/folder/write_source/surface_source.h5'}

Hi @Pranto

First of all, thank you for the observation.
In your experience, is there a threshold on the number of particles (max_particles) that can be collected on a surface using this feature?

There’s no threshold on the number of particles that can be banked on a specified surface. The size of the source bank will be max_particles × the number of processes you're running.
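As a quick illustration of that formula (the process count here is just an example):

```python
# Total banked sites = max_particles per process x number of processes,
# as described above.
max_particles = 5000
n_procs = 4  # hypothetical MPI process count
total_banked = max_particles * n_procs
print(total_banked)  # 20000
```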

Okay…
Then I need to figure out why, if I try to collect 1e8 particles as max_particles, it returns this error:

Reading settings XML file...
terminate called after throwing an instance of 'std::out_of_range'
  what(): stoi
Traceback (most recent call last):
  File "/home/path/to/data/write_f.py", line 380, in <module>
    openmc.run(threads=16)
  File "/home/path/openmc/openmc/executor.py", line 218, in run
    _run(args, output, cwd)
  File "/home/path/openmc/openmc/executor.py", line 28, in _run
    raise subprocess.CalledProcessError(p.returncode, ' '.join(args),
subprocess.CalledProcessError: Command 'openmc -s 16' died with <Signals.SIGABRT: 6>.

If I go for something lower, it runs. Any idea why it behaves this way?

Sorry for my late reply @tony_emme.

A std::out_of_range exception is thrown when code attempts to access or produce a value outside a defined range; std::stoi, for instance, throws it when the parsed number does not fit in an int.

Can you share your input files?

No need for apologies, you are helping me.
Anyway, write_file.py (2.9 KB) is a simple test I have been using to better understand the new feature. It seems the script runs as long as max_particles does not exceed a certain threshold. Or at least, that is what happens to me.

Any insight will be helpful
Tony

@tony_emme I didn’t see any error while running your code. Can I see your settings.xml file?
Did you try running your code from the terminal?

settings.xml (561 Bytes)

Here is the settings.xml file. By the way, have you tried running with max_particles set to 1 billion or even 1 trillion? If I do, in the first case it returns

terminate called after throwing an instance of 'std::bad_alloc'

while in the second case,

terminate called after throwing an instance of 'std::out_of_range'

@tony_emme An exception of type std::bad_alloc indicates that you ran out of memory. You have to increase the available memory in that case.

I am not sure I understood correctly: do you mean the RAM of the PC, or something else?

Yep, the RAM of your PC.

@tony_emme How much RAM do you have on your PC?

That may be, but I have a fairly new desktop PC with an 8-core i7-9700 CPU and 16 GB of RAM. I know these types of calculations, especially if not tailored properly, can cost a lot of memory, but I find it hard to believe it has already reached its limit.

Looking at the code, I think YoungHui Park might have intended to limit the value range to less than the type's range.

You can give it a try with the following change:

    // Get maximum number of particles to be banked per surface.
    // Using std::stoll (long long) instead of std::stoi avoids the
    // std::out_of_range thrown when the value exceeds a 32-bit int.
    if (check_for_node(node_ssw, "max_particles")) {
      max_particles = std::stoll(get_node_value(node_ssw, "max_particles"));
    }


From what I know, it makes sense, but before I go down that road, I want to try something else; I am not confident enough with C++ to modify the source code.
But thanks for all the help you have given me. Much appreciated,
Tony

So it sounds like there are two issues here. One is that if you allow max_particles to be too large, you may run out of memory: an array is allocated with a size equal to max_particles, and each element of the array needs 92 bytes, so if you have max_particles equal to 1e8, you’ll need 9.2 GB of memory. The other issue is that even if you do have the required memory, if the number of particles you’re trying to bank is greater than the limit of a 32-bit integer (about 2 billion), it doesn’t work. @Pranto’s suggested change should fix that (@Pranto, do you mind submitting a PR with that fix?). However, I would ask whether you really need max_particles to be that high. Do you really need to collect that many source particles for the problem at hand?
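A quick back-of-the-envelope check of both limits described above (the 92-bytes-per-banked-site figure is quoted from this thread and may be version-dependent):

```python
# Rough memory estimate for the pre-allocated surface-source bank,
# using the 92 bytes per banked site quoted above (version-dependent).
SITE_BYTES = 92
INT32_MAX = 2**31 - 1  # signed 32-bit integer limit, about 2.1 billion

def bank_memory_gb(max_particles):
    """Approximate memory needed for the surface-source bank, in GB."""
    return max_particles * SITE_BYTES / 1e9

for n in (5_000, 100_000_000, 1_000_000_000, 1_000_000_000_000):
    note = " (exceeds 32-bit int, so std::stoi throws)" if n > INT32_MAX else ""
    print(f"max_particles={n:>16,}: ~{bank_memory_gb(n):.4g} GB{note}")
```

This reproduces the behavior reported above: 1e8 sites need about 9.2 GB, 1 billion sites need about 92 GB (hence bad_alloc on a 16 GB machine), and 1 trillion overflows a 32-bit int (hence out_of_range from stoi).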