Hi Arcsin,
I noticed that you use 5 generations per batch. Is that necessary for your case? I usually use 1 generation per batch, so I don't set this parameter at all. 5 generations per batch means roughly 4-5 times more computation time per batch. If you want to reduce your keff uncertainty, I recommend increasing the number of particles and active batches instead; there is no need to set the generations_per_batch parameter.
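To see why more particles and active batches is the better lever: the statistical uncertainty of keff falls off roughly as one over the square root of the number of active histories (particles × active batches). A small sketch with purely illustrative numbers:

```python
import math

def scaled_uncertainty(sigma0, histories0, histories):
    """Estimate keff uncertainty after changing the number of active
    histories, using the ~1/sqrt(N) Monte Carlo scaling."""
    return sigma0 * math.sqrt(histories0 / histories)

# Quadrupling (particles x active batches) roughly halves the uncertainty:
sigma = scaled_uncertainty(100e-5, 1_000_000, 4_000_000)  # ~50 pcm
```

So doubling generations_per_batch buys you the same statistics as doubling particles, but without the option of keeping batches cheap and independent.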
# settings.generations_per_batch = 5
Also, starting with a TRISO lattice shape of (4, 4, 4) is a good choice; with a (1, 1, 1) shape the calculation time would likely be longer. Have you tried other shape configurations, e.g. (6, 6, 6) or (12, 12, 6)? It has been a while, but the last time I ran a sensitivity study on this TRISO lattice shape parameter, tuning it reduced the calculation time noticeably. If the shape is too fine, however, the time to prepare the distributed cell instances also increases, so there should be a shape that is optimal for your specific case.
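The trade-off is easy to see from the element count alone: a finer shape means fewer TRISOs to search per lattice element during tracking, but more distributed cell instances to prepare at initialization. A quick tally of the candidate shapes mentioned above:

```python
# More lattice elements = cheaper TRISO lookup per element during tracking,
# but more distributed cell instances to prepare at initialization.
shapes = [(1, 1, 1), (4, 4, 4), (6, 6, 6), (12, 12, 6)]
element_counts = {s: s[0] * s[1] * s[2] for s in shapes}
for shape, n in element_counts.items():
    print(f'shape={shape}: {n} lattice elements')
```

The optimum sits somewhere between the two extremes, which is why a small sensitivity scan over shapes is worth the effort.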
Also, it is quite normal for an OpenMC model with random geometry to take a long time just to create the summary file. Your input needs 18 minutes just to write the summary.h5 file on my old i7 laptop. I hope you are ready for the core-level calculation, because reading the hundreds of MB of geometry.xml and then generating the summary file may take some time.
You probably also noticed that generating a large set of random TRISO coordinates takes some time: for the 30% packing fraction in your 27.3*2 cm FCM/pin geometry, it takes around 3-4 minutes to generate more than 170,000 coordinates with openmc.model.pack_spheres. So I recommend saving your coordinates and reusing them later. Below, I have commented out the part of the script that generates the coordinates and saves them as a .dat file, so the saved file can be reused to save time in later simulations.
import math  # needed for the radius check below

def filtered_coordinates(coordinates):
    # drop TRISO centers within (channel radius 0.5 cm + one TRISO radius) of the z-axis
    return [coord for coord in coordinates
            if math.sqrt(coord[0]**2 + coord[1]**2) > 0.5 + triso_outer_radius]
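As a quick sanity check of the filter logic, here is a self-contained version run on a few synthetic points (the 0.5 cm channel radius and 0.04675 cm TRISO outer radius are assumptions for illustration only):

```python
import math

triso_outer_radius = 0.04675  # cm, hypothetical TRISO outer radius

def filter_coords(coordinates, channel_radius=0.5):
    """Keep only centers clear of the central coolant channel."""
    cutoff = channel_radius + triso_outer_radius
    return [c for c in coordinates
            if math.sqrt(c[0]**2 + c[1]**2) > cutoff]

points = [(0.1, 0.1, 0.0),   # inside the channel: removed
          (0.5, 0.0, 1.0),   # within one TRISO radius of it: removed
          (1.0, 1.0, 2.0)]   # clear of the channel: kept
kept = filter_coords(points)
```

Note that the cutoff is measured to the TRISO center, so adding one TRISO radius keeps whole particles (not just their centers) out of the channel.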
pack_region = -fuel_outer & -maxz & +minz
# # generate random TRISO positions (centers)
# coordinates = openmc.model.pack_spheres(triso_outer_radius, pack_region, pf=0.30, seed=12345)  # seed the random generator for reproducibility
# print(f'#coordinates = {len(coordinates)}')
# # filter out the central coordinates to leave the central coolant channel clear
# filt_coords = filtered_coordinates(coordinates)
#
# # export the coordinates so we don't need to regenerate the whole random sequence (approximately 3 min on an i7)
# coordinates = np.array(coordinates)
# coordinates.dump(file='TRISOcoordinates.dat')
# filteredcoordinates = np.array(filt_coords)
# filteredcoordinates.dump(file='filteredTRISOcoordinates.dat')
# read the saved filtered coordinates (takes ~0.1 s)
filt_coords = np.load('filteredTRISOcoordinates.dat', allow_pickle=True)
print(f'#filtered coordinates = {len(filt_coords)}')
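A small note on the save format: ndarray.dump writes a pickle, which is why allow_pickle=True is needed at load time. If you prefer a pickle-free round trip, np.save/np.load with a .npy file works the same way (the array below is just a stand-in for real TRISO centers):

```python
import numpy as np

coords = np.random.default_rng(12345).random((1000, 3))  # stand-in for TRISO centers
np.save('filteredTRISOcoordinates.npy', coords)   # plain .npy, no pickle involved
loaded = np.load('filteredTRISOcoordinates.npy')  # allow_pickle not required
```

This avoids loading pickled data from disk, which is generally safer if the file is ever shared between machines or users.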
Last one: if you use a smaller region to generate the random TRISO coordinates, e.g. filling just a single fuel/FCM pellet or a shorter height so that far fewer than 170,000 coordinates are needed, then I think you can reduce both the OpenMC preparation and calculation time, since you can then use a 3D hexagonal lattice to model your prismatic block.
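To get a feel for how much the coordinate count drops, the expected sphere count is just pf × region volume / sphere volume. The TRISO radius and pellet dimensions below are hypothetical, purely to show the scaling:

```python
import math

triso_outer_radius = 0.04675          # cm, hypothetical
pf = 0.30                             # packing fraction
v_triso = (4.0 / 3.0) * math.pi * triso_outer_radius**3

def n_spheres(region_volume):
    """Expected sphere count for a given packing region volume (cm^3)."""
    return int(pf * region_volume / v_triso)

# full-height pin region vs. a single short pellet (both hypothetical):
v_pin = math.pi * 0.75**2 * 54.6      # cm^3
v_pellet = math.pi * 0.75**2 * 1.0    # cm^3
print(n_spheres(v_pin), n_spheres(v_pellet))
```

Packing one pellet and then repeating it in a lattice means pack_spheres only has to place a small fraction of the coordinates, at the cost of every pellet sharing the same random arrangement.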
I hope you find a well-optimized model for your case.