Question about surface_source_write/read

Hello. I am new to using OpenMC for shielding analysis with coupled neutron/photon transport.
Since the problem I want to solve is a deep-penetration problem, I am considering using the surface_source_write/read functionality (SSW/SSR) provided by OpenMC.

For example, consider a cylindrical (or spherical) domain where the inner region (containing the central source) is a universe named “A1” and the outer region is a universe named “A2”.
One of my objectives is to see how the shielding performance changes as A2 is varied (while A1 is kept fixed).
Between A1 and A2, I will define the surface used for SSW/SSR.
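
Roughly, I am planning to set up the first (SSW) run like the sketch below, using the Python API. The surface ID and particle numbers are just placeholders for illustration, not my real model:

```python
import openmc

# First run (SSW): transport the full central source in A1 and record every
# particle that crosses the A1/A2 boundary surface.
# Surface ID 10 and the particle counts are placeholders.
settings = openmc.Settings()
settings.run_mode = 'fixed source'
settings.particles = 100_000   # 10 batches x 1e5 = 1e6 source histories
settings.batches = 10
# ... central source definition inside A1 goes here ...
settings.surf_source_write = {'surface_ids': [10], 'max_particles': 500_000}
settings.export_to_xml()
```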

Here are my main questions.

  1. In the first calculation (the SSW run), should the A2 universe be deleted?
    In my opinion, if A2 is present, the recorded surface source will be distorted:
    some particles that cross the surface are backscattered and re-cross it, so they are recorded at the surface twice or more (right?).
    I’m afraid the end result (after applying SSR) would then account for excessive backscattering between A1 and A2.
    If my idea is wrong, and A2 should in fact be kept in the SSW calculation step, I would like to hear the reason.

  2. (After Q1 is resolved,) when using SSR in the second calculation, can I delete the A1 universe and perform the calculation?
    I plan to tally the flux spectrum in the A2 universe and apply dose conversion, so I intend to ignore the results from the A1 universe.
    However, if the tally in A2 is affected by backscattering or by photon production occurring in A1, I should keep the A1 universe.
    But if my thinking is wrong and it is okay to delete A1, I will delete it for the sake of calculation speed.

  3. I would like to know how tallies are normalized when the SSW/SSR method is used.
    Are they normalized to the original source at the time SSW is applied?
    Or are they normalized to the particles read via SSR in the second (third, or later) calculation?

Thanks for all your replies.

Welcome Sam,

  1. Yes, delete A2.

  2. Likely, you should include A1.
    If your tallies are close to the border between the two zones, they are probably sensitive to the albedo from A1; if they are far away, they may not be. It depends on your exact problem.
    In some of my problems, I am sensitive to albedo scatter from A1 up to X cm. I have done tests removing the A1 geometry beyond X cm, and that did not give substantive speed-ups.
    But this is very problem-specific.

  3. It is normalized to the particles run in the SSR calculation.
    OpenMC phase-space files (h5 or MCPL) are not normalized or ‘self-aware’ of what the original source was (in MCNP they are).
    You will have to keep track of the arithmetic yourself and impose it when post-processing the OpenMC SSR results; there is a sketch after this list.
    Say in the OpenMC SSW run you ran 1e6 source neutrons and the phase space collected 500 particles, and then in the OpenMC SSR run you run 1e3 particles. Your SSR tally normalization would be
    (score * 1e3) / (2e6)
    or
    (score * 500) / (1e6)
    It depends on how you look at it, but ultimately you are multiplying in that phase-space ‘collection’ ratio (500 / 1e6 = 5e-4).
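
If it helps, here is a rough sketch of that bookkeeping with the Python API, using the numbers from the example above. The phase-space path, statepoint file name, and tally name are placeholders, and the surf_source_read settings assume a reasonably recent OpenMC version:

```python
import openmc

# --- SSR run settings (sketch): read the phase space written by the SSW run ---
settings = openmc.Settings()
settings.run_mode = 'fixed source'
settings.particles = 100     # 10 batches x 100 = 1e3 SSR histories
settings.batches = 10
settings.surf_source_read = {'path': 'surface_source.h5'}
settings.export_to_xml()

# --- Post-processing: rescale the SSR tally to 'per original source particle' ---
ssw_source_particles = 1e6   # source neutrons run in the SSW calculation
ssw_collected = 500          # particles recorded in the phase-space file

with openmc.StatePoint('statepoint.10.h5') as sp:   # statepoint from the SSR run
    tally = sp.get_tally(name='flux A2')             # hypothetical tally name
    # tally.mean is per SSR source particle; multiply by the collection ratio
    # (500 / 1e6) to express it per original source particle.
    flux_per_source = tally.mean * (ssw_collected / ssw_source_particles)
```

From there you can multiply by the real source intensity (particles per second) to get absolute quantities for the dose conversion.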

The A1/A2 albedo can often be fairly confusing; it’s good to run the problem with and without it and see it in action, and to compare against a vanilla single-step calculation.
The phase-space ratios can also get confusing; my colleagues are perpetually baffled that MCNP builds that ratio in automatically. Hopefully I got mine right there, I think I did, lol

You will likely want to include some variance reduction; that’s another rabbit hole.

Perry


Oh, sorry for the late response, and thanks for your reply.

I’ll try the calculation following your suggestion. If any problems come up, I’ll come back here and comment again.

Yes, as you expected, I’ll try additional variance reduction like weight windows if SSW/SSR is not enough to solve my problem. I just hope I won’t need it. haha.

SamKim