Generic interface to write a dataset. Supported types:
- integers (scalar and 1d-6d arrays)
- doubles (scalar and 1d-6d arrays)
- reals (scalar and 1d-6d arrays)
- strings (scalar and 1d-2d arrays)
- complex doubles (compound data type with "r"/"i" for the real and imaginary parts; scalar and 1d-6d arrays)
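For context, here is a minimal sketch of writing a few of these types through the one generic interface. The module name hdf5_utils_mpi is assumed from the source file name, hdf_open_file/hdf_close_file are assumed companion routines of this library, and the file and dataset names are illustrative; check your version for the exact names.

```fortran
program write_types_sketch
  use mpi
  use hdf5_utils_mpi   ! module name assumed from hdf5_utils_mpi.f90
  implicit none

  integer(HID_T) :: file_id   ! HID_T kind assumed re-exported by the module
  integer :: n_steps, ierr
  double precision :: temperature(4, 8)
  character(len=16) :: label

  call MPI_Init(ierr)

  n_steps = 100
  temperature = 0.0d0
  label = "run_01"

  ! hdf_open_file/hdf_close_file are assumed companion routines.
  call hdf_open_file(file_id, "output.h5", STATUS='NEW')

  ! The same generic interface handles every supported type.
  call hdf_write_dataset(file_id, "n_steps", n_steps)          ! integer scalar
  call hdf_write_dataset(file_id, "temperature", temperature)  ! double 2d array
  call hdf_write_dataset(file_id, "label", label)              ! string scalar

  call hdf_close_file(file_id)
  call MPI_Finalize(ierr)
end program write_types_sketch
```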
Parameters:

|      | Name      | Description |
|------|-----------|-------------|
| [in] | loc_id    | location id in the file, e.g. a file_id or group_id |
| [in] | dset_name | name of the dataset; note that HDF5 assumes the dataset does not already exist |
| [in] | array     | data array to be written |
| [in] | chunks    | (optional, deprecated) chunk size for the dataset |
| [in] | filter    | (optional, deprecated) filter to use ('none', 'szip', 'gzip', 'gzip+shuffle') |
| [in] | processor | (optional, default=-1) processor that provides the data; -1 if the data is the same on all processors |

If processor != -1, the following option is ignored:
Parameters:

|      | Name | Description |
|------|------|-------------|
| [in] | axis | (optional, default=-1) dimension along which the data will be stacked, starting from 1 |
Examples:

- A variable named "num_reacts" holds the same data on all processors, and only one copy should be saved in the final file:

      call hdf_write_dataset(file_id, "num_reacts", num_reacts)

- A variable named "num_reacts_0" exists on all processors but with a different shape and content on each, and only the copy on processor 0 (the first processor) should be saved:

      call hdf_write_dataset(file_id, "num_reacts_0", num_reacts_0, processor=0)

- A scalar variable named "num_particle" has a different value on each processor, and all of the values should be saved:

      call hdf_write_dataset(file_id, "num_particle", num_particle, axis=1)

  Note: this call saves an array whose size equals the number of processors. If you want to sum the values and save only the total, you must do the reduction yourself (see the sketch after these examples).

- An array "x_loc(XSIZE, YSIZE, ZSIZE)" has different values on each processor. In addition, the meaningful data is not the whole array but "x_loc(xstart:xend, ystart:yend, 1:num)", where xstart, xend, ystart, and yend are the same across processors but num differs on each processor. To stack the variable along the third dimension, use the following call; the API sets the hyperslab based on the shape of the array provided:

      call hdf_write_dataset(file_id, "x_loc", x_loc(xstart:xend, ystart:yend, 1:num), axis=3)

  The function also saves the number of rows contributed by each processor as an attribute.
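Regarding the note in the "num_particle" example, here is a minimal sketch of summing per-processor values before writing a single total. MPI_Allreduce is standard MPI; the subroutine name and the dataset name "total_particle" are illustrative, not part of this library.

```fortran
! Sketch: reduce per-processor values before writing one shared total.
! Assumes MPI is initialized and file_id refers to an open file.
subroutine save_total_particles(file_id, num_particle)
  use mpi
  use hdf5_utils_mpi
  implicit none
  integer(HID_T), intent(in) :: file_id
  integer, intent(in)        :: num_particle
  integer :: total_particle, ierr

  ! Sum num_particle across all ranks; the write API will not do this.
  call MPI_Allreduce(num_particle, total_particle, 1, MPI_INTEGER, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  ! Every rank now holds the same total, so the default call (no
  ! processor/axis options) writes a single shared copy.
  call hdf_write_dataset(file_id, "total_particle", total_particle)
end subroutine save_total_particles
```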
The documentation for this interface was generated from the following file:
- /home/hluo171851/Git/HDF5_utils/hdf5_utils_mpi.f90