The answer is “yes and no”. In a world without limited-precision floating point numbers, the results truly would be exactly the same no matter how many MPI processes are used. In practice, results usually match for a while during a run, but eventually some difference will appear because the order of operations differs, e.g., in an MPI_REDUCE of tally results across multiple processes.
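As a quick illustration of that last point (plain Python, not OpenMC code), floating-point addition is not associative, so the order in which partial sums are combined across ranks can change the last few digits:

```python
# Floating-point addition is not associative, so reducing partial tally
# sums in a different order can give a slightly different result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False
```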
@maximeguo No, statistical convergence should look the same whether you are using MPI parallelization or not. There is no way to set a different seed for each node nor is that necessary – the parallelization works by dividing the total number of particles per batch over the MPI processes, and each particle is initialized with a different seed (that is based on the starting seed), so results from different MPI processes are independent of one another.
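For reference, this is roughly what the setup looks like in the Python API; a minimal sketch assuming your geometry/materials XML files already exist, and the `mpiexec` command and process count are just placeholders for your own launcher:

```python
import openmc

settings = openmc.Settings()
settings.batches = 500
settings.particles = 50_000  # total particles per batch, split across MPI ranks
settings.seed = 1            # single starting seed; per-particle seeds are derived
                             # from it internally, so no per-node seed is needed
settings.export_to_xml()

# Each rank simulates its share of the particles per batch; tally results
# are reduced across ranks at the end of each batch.
openmc.run(mpi_args=['mpiexec', '-n', '4'])
```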
Thanks a lot. When OpenMC tallies in MPI parallelization, it exchanges data after each batch, so a large number of particles per batch is favorable. But how many particles per batch is a good option? For example, a calculation is 500 b * 50000 p/b. Could we change it, in an extreme way, to 5 b * 5000000 p/b?
The best advice I can give is that your particles per batch should be large enough such that the time being spent on tally synchronization should be a small percentage of the overall runtime. If you have a lot of tallies and not so many particles per batch, you could end up wasting a lot of time on parallel communication. However, you should be cautious not to go too far – a batch is a statistical realization, so if you only have 5 batches, you only have 5 realizations for each tallied quantity. This means, e.g., that you will have a higher Student’s t factor on confidence intervals.
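To give a feel for the batch-count effect, here is a quick sketch using SciPy: the half-width of a 95% confidence interval on a tally mean is t_{0.975, N-1} * s / sqrt(N), where N is the number of batches (realizations), and with only a handful of batches the t factor is noticeably larger:

```python
from scipy.stats import t

# t factor for a two-sided 95% confidence interval with N batches
for n_batches in (5, 50, 500):
    t_factor = t.ppf(0.975, df=n_batches - 1)
    print(f"{n_batches:>4} batches -> t factor {t_factor:.3f}")
# 5 batches -> ~2.776, 500 batches -> ~1.965
```

So with only 5 batches you pay roughly a 40% penalty on the confidence-interval width from the t factor alone, on top of having a much noisier estimate of the standard deviation itself.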