TRISO Model XML Generation and HPC Simulation with Different Parameters

Hello!

I am working on a project that requires simulating thousands of TRISO HTGR fuel blocks with different combinations of enrichment, boron content, and packing fraction. I’m using SLURM on an HPC cluster: a script generates the XML files for the different combinations of these three parameters via a SLURM job array, and then OpenMC is run through mpirun.

I would like to ask if there is a way to accelerate this process. Do I need to regenerate the geometry and materials files for every change I make?

Is there something wrong with how I’m going about this?

Thank you very much in advance.

bump~

I’ve also attached my current Python code here; I want to reduce the processing time. I’m using an HPC node with 60 threads for this job, and each run takes about 1 hour. I based this on the VHTR example. Is there something wrong with what I’m doing?

I have tried increasing the shape to a higher value, as suggested here, but it increased the simulation time instead.

HTTR.py (8.8 KB)

I haven’t done any work with TRISO fuel myself, but for my refuelling methodology in a depletion-cycle simulation I need to update the materials and then the geometry, so I don’t think there is a way around this. Even when materials are given the same name, they will not be replaced within geometry material assignments that have already been made, so the geometry always has to be reinitialized. That said, I think you should be able to limit this to the sections whose materials you are directly changing, plus any universe they are packed into.

If your reactor is symmetric in design and you are not looking for highly spatially resolved results, you might look at doing 1/4- or 1/2-core representations with reflective boundary conditions, as this would save a significant amount of time. (A little work may be needed so that the boundary does not distort the distribution of the TRISO fuel, depending on how you create the packing.)

The only other thing I can suggest is to lower your counting statistics (particles/generations/batches) until you have a result that agrees with what you want to display (e.g., a trend with TRISO placement), and then increase the statistics afterwards to get a result within a variance that is acceptable to you.
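In OpenMC terms, the reflective-boundary idea can be sketched roughly like this (all surface names and dimensions here are placeholders, not taken from the attached model):

```python
import openmc

# Reflective planes at x = 0 and y = 0 cut the core down to one quadrant;
# neutrons crossing them are mirrored back, mimicking the full core.
xplane = openmc.XPlane(x0=0.0, boundary_type='reflective')
yplane = openmc.YPlane(y0=0.0, boundary_type='reflective')
core_outer = openmc.ZCylinder(r=150.0, boundary_type='vacuum')  # placeholder radius

# Quarter-core region: inside the outer cylinder, with x >= 0 and y >= 0
quarter_region = -core_outer & +xplane & +yplane
quarter_cell = openmc.Cell(region=quarter_region)
```

As noted above, if the TRISO particles are packed randomly, the packing region should respect the same cut so that particles do not straddle the reflective planes.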

I see, thank you very much for the suggestions. I’ll probably look into symmetry. I’m already working with low particle counts here.

Hi Arcsin,
I noticed that you use 5 generations for each batch. Is that necessary for your case? I usually use 1 generation per batch, so I choose not to set this parameter at all. Five generations per batch means roughly 4-5 times more calculation time per batch. If you want to reduce your k-eff uncertainty, I recommend increasing the number of particles and active batches instead; there is no need to set the generations_per_batch parameter.

# settings.generations_per_batch = 5

Also, a TRISO lattice shape of (4, 4, 4) is a good start, because with a (1, 1, 1) shape the calculation time will be longer. Have you tried other shape configurations, e.g. (6, 6, 6) or (12, 12, 6)? It has been a long time, but the last time I did a sensitivity calculation on this TRISO lattice shape parameter, it reduced the calculation time up to a point: if the shape is too large, the time to prepare the distributed cell instances also increases. I think you will find a shape that is optimized for your specific case.
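As a rough sketch of such a shape sweep (assuming `trisos`, `lower_left`, `pitch`, `graphite`, and `fuel_cell` already exist in the model script, as in the OpenMC TRISO examples; the timing of each run would be measured externally):

```python
import openmc

# create_triso_lattice sorts the particles into shape[0]*shape[1]*shape[2]
# lattice elements, so particle tracking only checks nearby TRISOs.
for shape in [(4, 4, 4), (6, 6, 6), (12, 12, 6)]:
    lattice = openmc.model.create_triso_lattice(
        trisos,      # list of openmc.model.TRISO particles
        lower_left,  # lower-left corner of the packed region
        pitch,       # element pitch per axis = region extent / shape
        shape,       # number of lattice subdivisions per axis
        graphite,    # background (matrix) material
    )
    fuel_cell.fill = lattice
```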

Also, I think it is quite normal for an OpenMC model with random geometry to take a long time just to create the summary file. Your input needs 18 minutes just to write the summary.h5 file on my old i7 laptop. I hope you are ready for the core-level calculation, because it might take some time to read the hundreds of MB of geometry.xml and generate the summary file later.

I think you have also noticed that generating a large set of random TRISO coordinates takes some time; e.g., for the 30% packing fraction in your 27.3×2 cm FCM/pin geometry, it needs around 3-4 minutes to generate more than 170,000 coordinates using openmc.model.pack_spheres. So I recommend saving your coordinates and reusing them later to save time. Below, I have commented out the part of the script that generates the coordinates and saves them as a .dat file, and then I load that file to save time in later simulations.

import math
import numpy as np
import openmc

# `triso_outer_radius`, `fuel_outer`, `maxz`, and `minz` are defined
# earlier in the model script.

def filter_coordinates(coordinates):
    # Drop particles whose centers overlap the central coolant channel
    # (radius 0.5 cm, plus one TRISO outer radius of clearance)
    return [c for c in coordinates
            if math.sqrt(c[0]**2 + c[1]**2) > 0.5 + triso_outer_radius]

pack_region = -fuel_outer & -maxz & +minz

# # Generate random TRISO positions (centers); run once, then comment out
# coordinates = openmc.model.pack_spheres(triso_outer_radius, pack_region,
#                                         pf=0.30, seed=12345)  # fixed seed for reproducibility
# print(f'#coordinates = {len(coordinates)}')
# # Filter out central coordinates to leave the coolant channel clear
# filt_coords = filter_coordinates(coordinates)
#
# # Export the coordinates so we don't need to regenerate the whole random
# # sequence (approximately 3 min on an i7)
# np.array(coordinates).dump('TRISOcoordinates.dat')
# np.array(filt_coords).dump('filteredTRISOcoordinates.dat')

# Read the saved filtered coordinates (~0.1 s)
filt_coords = np.load('filteredTRISOcoordinates.dat', allow_pickle=True)

print(f'#filtered coordinates = {len(filt_coords)}')

One last thought: if you can generate the random TRISO coordinates in a smaller region, e.g. inside just a single fuel/FCM pellet, or over a shorter height, so that the number of random coordinates drops below 170,000, then I think you can reduce the OpenMC preparation and calculation time, since you can also use a 3D hex lattice to model your prismatic block.
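A minimal sketch of the hex-lattice idea, assuming `fuel_pin_univ`, `coolant_univ`, and `graphite_univ` are universes defined elsewhere (names and pitch are placeholders):

```python
import openmc

lattice = openmc.HexLattice()
lattice.center = (0.0, 0.0)
lattice.pitch = (3.6,)            # placeholder pin pitch in cm
lattice.orientation = 'y'
# Rings are listed from outermost inward; the innermost entry is the
# single center position.
lattice.universes = [
    [fuel_pin_univ] * 6,          # ring of 6 fuel pins
    [coolant_univ],               # central coolant channel
]
lattice.outer = graphite_univ     # fills positions outside the listed rings
```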

I hope you can find the optimized model for your case scenario.

To add to what Wahid said, I don’t think you need to randomly redistribute new coordinates every time when you are only varying the composition of the TRISO. You will still have to when looking at different packing fractions, but otherwise I think this step can be skipped for the other sensitivity analyses.

I do disagree about generations per batch, though: it can improve synchronization between batches, it prevents under-prediction of uncertainty in the results, and from what I have seen it should actually decrease computational time. Without multiple generations, the source sites from the previous batch are reused, which correlates the batches and causes under-prediction of uncertainty. I have done this analysis for my model (a full-core CANDU) and found it significantly improved my computational time. For my model I use 10,000 particles/generation (you want at least 100 per core), 20 generations/batch, and 20 batches; if you have an HPC you can certainly go to higher values when more accuracy is required. From sensitivity analysis I found that you want a similar number of generations/batch as batches, but you can, and I would suggest you do any time you make a new model, run a sensitivity analysis on your counting statistics.
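A minimal OpenMC settings block along these lines (the numbers are the CANDU values quoted above, purely illustrative for another model):

```python
import openmc

settings = openmc.Settings()
settings.particles = 10_000          # per generation; aim for >= 100 per core
settings.generations_per_batch = 20  # decorrelates successive batches
settings.batches = 20                # total batches, including inactive
settings.inactive = 5                # placeholder; tune via Shannon entropy
settings.export_to_xml()
```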

Thanks Jarret. Most of the time I am only using 100-200 active batches in my assembly/core models, with more than 100,000 particles, sometimes 1,000,000, to reduce my uncertainty. But since Jarret said that generations/batch could do that better, I might need to try that later. Thanks for the suggestion, Jarret.

I think Arcsin can check the effects of generations/batch later, perhaps by looking at the Shannon entropy or checking the tallies for neutron-clustering effects in his graphite-moderated reactor. With his computing power, I think it is an easy task to compare some configurations of particles/batch, generations/batch, inactive batches, and total batches, to see which has the better effect on reducing uncertainty versus calculation time, which correlates with the number of neutron histories.
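Checking source convergence via Shannon entropy just needs an entropy mesh in the settings; a sketch (the bounding-box numbers are placeholders for the fissile region):

```python
import openmc

# Overlay a coarse mesh on the fissile region; OpenMC then reports the
# Shannon entropy of the fission source each generation, which helps
# choose the number of inactive batches.
mesh = openmc.RegularMesh()
mesh.lower_left = (-150.0, -150.0, 0.0)
mesh.upper_right = (150.0, 150.0, 400.0)
mesh.dimension = (8, 8, 8)

settings = openmc.Settings()
settings.entropy_mesh = mesh
```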

I have tried running the HTTR input with a TRISO lattice shape of (6, 6, 6) on my laptop, and the calculation time at 1 generation/batch was reduced to 37 min, which is good compared to 57 min with the (4, 4, 4) shape. So Arcsin can try to find his optimal TRISO lattice shape.


Thanks a lot for these! I was actually thinking about the method you mentioned, wahidlufthfi, of saving the coordinates (since that is where I spend a lot of time, especially when I try higher packing fractions), and you helped me a lot by providing a real direction on how to implement it. I tried higher shapes like (8, 8, 8) and even (8, 8, 15), but they only increased my time. I’ll try (6, 6, 6) on my end to check whether it decreases the time.

I’ll also try changing the generations/batch; it was something I didn’t change from the example I was using.

Saving the TRISO coordinates for specific packing fractions would really help speed things up. I will try to figure out how to implement it automatically on the HPC.
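One way to automate this in a job array is a small cache keyed by packing fraction, so the first task that needs a given pf generates the coordinates and every later task just reads them. A stdlib-only sketch (here `generate` stands in for the expensive openmc.model.pack_spheres call, and the cache directory name is arbitrary):

```python
import json
from pathlib import Path

def cached_coordinates(pf, generate, cache_dir="triso_cache"):
    """Return TRISO center coordinates for packing fraction `pf`,
    generating and saving them only on the first call."""
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    path = cache / f"coords_pf{pf:.3f}.json"
    if path.exists():                 # fast path: just read the file
        return json.loads(path.read_text())
    coords = generate(pf)             # slow path: sphere packing (minutes)
    path.write_text(json.dumps(coords))
    return coords
```

One caveat on a shared filesystem: concurrent array tasks could race on the first write, so either pre-generate the coordinates in a separate job before launching the array, or write to a temporary file and rename it atomically.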

Thanks again for your suggestions!
