Dear all,
Here is a screenshot of the structure of the U235.h5 ENDF cross-section file, viewed in HDFView.
The U235.h5 file, which is in HDF5 format, contains the energy grids as well as the reaction cross sections. The energy grids and cross sections for MT = 2, 3, 18, 102, 301, 444 and 99 each contain a large number of data points at specific temperatures, for example:
GROUP "energy" {
DATASET "0K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 242593 ) / ( 242593 ) }
}
DATASET "1200K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 50104 ) / ( 50104 ) }
}
DATASET "2500K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 43405 ) / ( 43405 ) }
}
DATASET "250K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 80882 ) / ( 80882 ) }
}
DATASET "294K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 76514 ) / ( 76514 ) }
}
DATASET "600K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 60631 ) / ( 60631 ) }
}
DATASET "900K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 53930 ) / ( 53930 ) }
}
}
I then created a new U235.h5 file with exactly the same structure as the one provided in the OpenMC cross-section library, but with fewer data points (5000). After comparing the two, I confirmed that they are structurally identical; the only difference is the reduced number of data points in my newly generated file:
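For reference, the reduction I applied is roughly the following (a minimal sketch in plain Python; the evenly spaced index selection and the `thin_grid` name are my own simplification for illustration, not OpenMC code):

```python
import math

def thin_grid(energy, xs, n_points=5000):
    """Thin a tabulated (energy, cross-section) pair down to n_points
    entries by keeping evenly spaced indices; the first and last points
    are always preserved so the energy range is unchanged.
    NOTE: this naive reduction ignores resonance structure."""
    n = len(energy)
    if n_points >= n:
        return list(energy), list(xs)
    # Evenly spaced indices from 0 to n-1, endpoints included
    idx = [round(i * (n - 1) / (n_points - 1)) for i in range(n_points)]
    return [energy[i] for i in idx], [xs[i] for i in idx]

# Toy example: a fake 242593-point log-spaced grid thinned to 5000 points
full_e = [1e-5 * math.exp(i * math.log(2e12) / 242592) for i in range(242593)]
full_xs = [1.0 / math.sqrt(e) for e in full_e]  # 1/v-like shape
e5k, xs5k = thin_grid(full_e, full_xs, 5000)
print(len(e5k), e5k[0], e5k[-1] == full_e[-1])
```

Since the same indices are used for the energy grid and the cross sections, each retained energy still maps to its original cross-section value.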
GROUP "energy" {
DATASET "0K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "1200K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "2500K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "250K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "294K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "600K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
DATASET "900K" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 5000 ) / ( 5000 ) }
}
}
I replaced the original U235.h5 in the neutron cross-section data with my newly generated file. When I run the OpenMC pincell depletion example, I still get the error below:
Does anyone have experience with this?
If the problem is indeed the number of data points, may I ask whether a reduced number of energy points is actually supported in OpenMC?
I have tried to find more information in the part of the OpenMC source code that handles HDF5 files.
I found the following code, but I could not understand much of it.
Here is OpenMC's source code for handling HDF5 files: https://github.com/openmc-dev/openmc/blob/develop/src/hdf5_interface.cpp
I also found the source code for cross-section data handling: https://github.com/openmc-dev/openmc/blob/develop/src/xsdata.cpp
When I search for the words "dimension" or "array" in the files above, I can see that some lines deal with dimensions and arrays, but with my limited knowledge I could not follow them.
Could you please guide me here?
To create the new HDF5 cross-section file, I wrote a Python script that reproduces the structure of the HDF5 file you provide but fills it with my own data points; this also lets me modify your files and keep only as many data points as I need.
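Concretely, my script does something like the following (a sketch assuming h5py; the `copy_truncated` name, the visit logic and the blunt "keep the first n entries" cut are my own illustration, not OpenMC's tooling, and a real reduction must of course keep matching energy/xs pairs over the full energy range):

```python
import h5py

def copy_truncated(src_path, dst_path, n_keep=5000):
    """Copy an HDF5 file, truncating every 1-D dataset longer than
    n_keep to its first n_keep entries, while preserving the group
    hierarchy, dataset names and attributes."""
    with h5py.File(src_path, "r") as src, h5py.File(dst_path, "w") as dst:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                data = obj[()]
                if getattr(data, "ndim", 0) == 1 and len(data) > n_keep:
                    data = data[:n_keep]  # illustration only: drops the tail
                d = dst.create_dataset(name, data=data)
                d.attrs.update(obj.attrs)
            else:  # a group: recreate it and copy its attributes
                g = dst.require_group(name)
                g.attrs.update(obj.attrs)
        # visititems walks parents before children, so paths resolve
        src.visititems(visit)
        dst.attrs.update(src.attrs)
```

I verified with HDFView that a file produced this way has the same group/dataset layout as the original, only with shorter arrays.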
I am aware that OpenMC ships a script for creating HDF5 files from ACE files: https://github.com/openmc-dev/openmc/blob/develop/scripts/openmc-ace-to-hdf5
I would really like to use this script, but it does not let me select the number of data points to store in the HDF5 file. Does anyone have an idea how to control the number of data points with this script?
Alternatively, I could reduce the number of data points in the ACE file itself and then use the ACE-to-HDF5 script provided by the OpenMC developers. Unfortunately, the structure of the provided ACE files is quite complicated, with many redundant MT numbers, so I am not familiar with how one can thin the data points in an ACE file. Does anyone have an idea how to select and reduce the number of data points in ACE files?
Best regards
Alex