Hi, I modeled a quarter-core reactor with reflective boundary conditions on x and y to simulate a full-core depletion, and I wanted to compare the memory requirements of the different integrators, so I used the following lines of code in my Jupyter notebook:
import os
import subprocess

import openmc
import openmc.deplete

# Attach psrecord to this notebook kernel so the run below gets profiled
pid = os.getpid()
cmd = f"psrecord {pid} --include-children --interval 10 --duration 21600 --log openmc_performance.log"
subprocess.Popen(cmd.split())

# geometry, materials, and settings are defined earlier in the notebook
model = openmc.Model(geometry, materials, settings)

chain = openmc.deplete.Chain.from_xml("/home/smrcenter/openmc/endfb80/chain_casl_pwr.xml")
chain.export_to_xml("chain_mod_q.xml")
operator = openmc.deplete.CoupledOperator(model, "chain_mod_q.xml")
cecm = openmc.deplete.PredictorIntegrator(operator, time_steps, power, timestep_units='d', solver='cram16').integrate()
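For reference, time_steps and power are also defined earlier in the notebook; they take roughly this shape (the power value below is only a placeholder, not my actual input):

time_steps = [0.1, 1.0]  # in days: a 0.1 d step followed by a 1.0 d step
power = 3.0e6            # total power in W; placeholder value only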
I then plotted the CPU utilization, real memory, and virtual memory over the duration of the run.
This run used 20 batches, 20 generations per batch, 5 inactive batches, and 10,000 particles, with the memory allocation in my WSL settings set to 30 GB and a 24-core processor (2400% max utilization). The time steps were 0.1 d followed by 1.0 d, with reaction-rate tallies on the fuel materials. The CPU utilization looks right to me: the CPU is heavily used during transport but much less during depletion, whereas the inverse is true for memory. I also plotted the real and virtual memory separately.
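For context, the run parameters above map onto openmc.Settings roughly like this (a sketch using the numbers quoted, not my exact input):

settings = openmc.Settings()
settings.batches = 20                # total batches
settings.generations_per_batch = 20  # generations per batch
settings.inactive = 5                # inactive batches
settings.particles = 10000           # particles per generation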
Could anyone explain how I could see a peak of approximately 286 GB of real memory when only 30 GB of RAM is available (which I was under the impression was the only real memory), and a maximum virtual memory of 1764 GB when only 8 GB is allocated to swap space (which is what I thought virtual memory was for, i.e. hard page faults)? This might be arising from my misunderstanding of how Linux accounts for memory.
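For what it's worth, as far as I can tell psrecord reads these numbers through psutil, and with --include-children it sums over the child processes as well. A minimal sketch of how to reproduce that sum directly, which is how I interpret the plotted real (RSS) and virtual (VMS) memory, would be:

import os
import psutil

# Sum resident (RSS) and virtual (VMS) memory over this process and all of its
# children, which should roughly match what psrecord logs with --include-children
proc = psutil.Process(os.getpid())
procs = [proc] + proc.children(recursive=True)
rss_total = sum(p.memory_info().rss for p in procs)
vms_total = sum(p.memory_info().vms for p in procs)
print(f"RSS: {rss_total / 1e9:.1f} GB, VMS: {vms_total / 1e9:.1f} GB")

Thank you for your assistance.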