I was wondering if there is a method to determine the simulation run time for the different stages of a depletion simulation. I want to compare different depletion predictor integrators for simulating the depletion of a full-core CANDU reactor; I use Jupyter notebooks to run my code. A paper by Yu and Forget mentions that for most simulations the transport portion of the depletion simulation is time-dominant, except for big models such as a full-core, every-rod depletion, where the depletion step can become dominant. Since the different integrators can have a greater impact on one stage or the other, I wanted to complete my own analysis on my reactor design by gathering the simulation time for each transport and depletion step. From what I can see I will have to monitor the simulation and time the transport section myself with a stopwatch, and then time the depletion section, which I assume is being completed while the simulation is creating the statepoint, though I would like confirmation about this as well. Lastly, if you have a tip on which integrator is preferred, that would also be appreciated.
Hello @Jarret,
One way to do it is by simply adding lines in the Integrator.integrate() function in deplete/abc.py. It will probably look something like this:
# (Inside Integrator.integrate() in deplete/abc.py; t and self._i_res are
# set earlier in the method. Requires "from time import perf_counter".)
for i, (dt, source_rate) in enumerate(self):
    if output and comm.rank == 0:
        print(f"[openmc.deplete] t={t} s, dt={dt} s, source={source_rate}")

    tickTransport = perf_counter()

    # Solve transport equation (or obtain result from restart)
    if i > 0 or self.operator.prev_res is None:
        n, res = self._get_bos_data_from_operator(i, source_rate, n)
    else:
        n, res = self._get_bos_data_from_restart(source_rate, n)

    tockTransport = perf_counter()

    # Solve Bateman equations over time interval
    proc_time, n_list, res_list = self(n, res.rates, dt, source_rate, i)

    # Insert BOS concentration, transport results
    n_list.insert(0, n)
    res_list.insert(0, res)

    # Remove actual EOS concentration for next step
    n = n_list.pop()

    tockDepletion = perf_counter()
    print(f"Transport time = {tockTransport - tickTransport} s\n"
          f"Depletion time = {tockDepletion - tockTransport} s")

    StepResult.save(self.operator, n_list, res_list, [t, t + dt],
                    source_rate, self._i_res + i, proc_time, path)

    t += dt
As for a recommendation on the integrator algorithm, Dr. Josey’s thesis is very insightful:
Thank you for your assistance; I was not even thinking about editing the .py files that the openmc modules use. I made the changes you suggested and got some timings, with transport taking 1062.98… s and depletion taking 839.52… s for a predictor integrator at a 0.1 d time interval. This seems expected, since from the documentation I have read, transport is usually significantly more dominant, except they stated that for full assemblies depletion will become more prominent. Also, thanks for the document; it gave me further information on the depletion integrators. I think I am being bottlenecked by memory, with usage sitting consistently at half of the system’s available memory, which will probably force me to use the most memory-conservative option. In addition, since I am going to be modelling online refuelling between depletion steps, I believe I don’t need the most accurate option, as the timesteps will be limited to a day or two at most, so a faster simulation is also preferred. Lastly, if you happen to know of a way to monitor the memory usage of a specific depletion run so I can compare between them, I would also appreciate the further help.
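In case it is useful to anyone else reading, this is roughly how I am switching between integrators in my notebook. It is only a minimal sketch: model, the chain file name, the power level, and the timestep list are placeholders for my own inputs, and the parameter names follow the openmc.deplete Python API as I understand it.

import openmc.deplete

# Transport-depletion coupling; the chain file path is a placeholder
op = openmc.deplete.CoupledOperator(model, chain_file="chain_candu.xml")

# Placeholder timesteps (days) and total power (W)
timesteps = [0.1, 1.0, 1.0]
power = 2064e6

# Predictor: one transport solve per timestep (cheapest)
integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps, power, timestep_units='d')

# CE/CM: an extra transport solve per timestep, usually more accurate
# integrator = openmc.deplete.CECMIntegrator(
#     op, timesteps, power, timestep_units='d')

integrator.integrate()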
You’re welcome, I’m glad my janky solution helps you.
But I’m not following you when you wrote “being bottlenecked by memory, with usage sitting consistently at half of the system’s available memory”: does that mean you only use 50% of your available memory (e.g. 16 GB used / 31 GB available), or that you use your swap memory to 50% of its capacity?
Also, monitoring memory is very, very hard; if memory usage could be tracked extensively, we wouldn’t need to be concerned with memory leaks. However, one can simply run top every second and dump the output into a text file (it will be a huge text file by the end of your simulation). Then, using regex and some string manipulation, we can extract the monitored memory usage of the simulation and plot it, as in the sketch below.
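Here is a minimal sketch of that idea, assuming the log was produced with something like top -b -d 1 > top_log.txt and that your simulation shows up as a process named openmc; the column layout assumed here is top’s default output and may differ on your machine.

import matplotlib.pyplot as plt

# Assumes the log was produced with: top -b -d 1 > top_log.txt
# Default top columns: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
mem_gib = []
with open("top_log.txt") as f:
    for line in f:
        fields = line.split()
        if fields and fields[-1] == "openmc":   # adjust to your process name
            res = fields[5]                     # resident memory column
            if res.endswith("g"):
                mem_gib.append(float(res[:-1]))
            elif res.endswith("m"):
                mem_gib.append(float(res[:-1]) / 1024)
            else:
                mem_gib.append(float(res) / 1024**2)  # plain value is KiB

plt.plot(mem_gib)
plt.xlabel("sample (~1 s apart)")
plt.ylabel("resident memory (GiB)")
plt.show()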
Best regards,
Chris
Currently my VM is set up with the default of half the PC’s memory. I’ll be increasing it, but I won’t be able to get much more out of it. I’ll have to take a look at the more memory-saving integrators and see whether they run faster than expected compared to other people’s estimates; that might tell me whether memory is the big constraint. I’ll give your suggestion an attempt as well. Thanks again for your assistance, it’s been very informative.
After reviewing the times I gathered with the above method, and after monitoring the VM’s overall memory and CPU usage with a separate script during a depletion simulation, I discovered that the method here does not accurately measure the time. When watching the computer’s metrics, the transport and depletion periods are easily identified: during transport the CPU usage is maxed out at whatever limit is set, while memory usage is slightly reduced, whereas during the depletion step the CPU usage drops significantly (from ~89% to 4.2% on my 24-core setup) and memory sees a slight increase over the depletion step, as you can see from the attached image of my computer’s metrics during a two-timestep Predictor integrator run (0.1 d and 1.0 d successively).
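For anyone who wants to reproduce this kind of monitoring, here is a minimal sketch of a sampler that polls system-wide CPU and memory usage once per second and writes it to a CSV. It uses the third-party psutil package; the file name and sample interval are arbitrary choices for illustration, not my actual script.

import csv
import time
import psutil

# Sample system-wide CPU and memory once per second; stop with Ctrl-C.
with open("usage_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "cpu_percent", "mem_used_gib"])
    start = time.time()
    try:
        while True:
            cpu = psutil.cpu_percent(interval=1.0)        # averaged over 1 s
            mem = psutil.virtual_memory().used / 1024**3  # bytes -> GiB
            writer.writerow([round(time.time() - start, 1), cpu, round(mem, 2)])
            f.flush()
    except KeyboardInterrupt:
        pass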
I don’t know the specific reason why, though I think it ignores the final transport step that is attached to all of the integrators. I.e., the predictor integrator has one transport step and one depletion step per timestep, but if you run it for two timesteps it actually completes three transport steps, since I assume OpenMC wants to re-measure the k-eff after all of the depletion steps are completed. There were additional discrepancies that I found beyond the influence of this last step, though I cannot attribute them to anything. If anyone is interested in measuring the time, besides doing it yourself with a stopwatch, I would suggest using this method.