generic
Generic functions for working with all types of dfs files.
Functions
| Name | Description |
|---|---|
| avg_time | Create a temporally averaged dfs file. |
| change_datatype | Change datatype of a DFS file. |
| concat | Concatenates files along the time axis. |
| diff | Calculate difference between two dfs files (a-b). |
| extract | Extract timesteps and/or items to a new dfs file. |
| fill_corrupt | Replace corrupt (unreadable) data with fill_value, default delete value. |
| quantile | Create temporal quantiles of all items in dfs file. |
| scale | Apply scaling to any dfs file. |
| sum | Sum two dfs files (a+b). |
avg_time
generic.avg_time(infilename, outfilename, skipna=True)
Create a temporally averaged dfs file.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| skipna | bool | exclude NaN/delete values when computing the result, default True | True |
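The effect of skipna can be illustrated with numpy, whose nanmean/mean distinction mirrors the same semantics. This is a sketch of the behavior per spatial point, not the actual implementation:

```python
import numpy as np

# one item, 4 timesteps, 2 spatial points; NaN marks a delete value
data = np.array([[1.0, 2.0],
                 [3.0, np.nan],
                 [5.0, 6.0],
                 [7.0, 8.0]])

avg_skip = np.nanmean(data, axis=0)  # skipna=True: delete values excluded per point
avg_keep = np.mean(data, axis=0)     # skipna=False: any delete value propagates

print(avg_skip)
print(avg_keep)
```

With skipna=True the second point averages only its three valid values; with skipna=False it becomes a delete value in the output.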
change_datatype
generic.change_datatype(infilename, outfilename, datatype)
Change datatype of a DFS file.
The data type tag is used to classify the file within a specific modeling context, such as MIKE 21. There is no global standard for these tags—they are interpreted locally within a model setup.
Application developers can use these tags to classify files such as bathymetries, input data, or result files according to their own conventions.
Default data type values assigned by MikeIO when creating new files are:
- dfs0: datatype=1
- dfs1-3: datatype=0
- dfsu: datatype=2001
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| datatype | int | DataType to be used for the output file | required |
Examples
>>> change_datatype("in.dfsu", "out.dfsu", datatype=107)
concat
generic.concat(infilenames, outfilename, keep='last')
Concatenates files along the time axis.
Overlap handling is controlled by the keep argument; by default the values from the last (newest) file are used.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilenames | Sequence[str | pathlib.Path] | filenames to concatenate | required |
| outfilename | str | pathlib.Path | filename of output | required |
| keep | str | either 'first' (keep older), 'last' (keep newer) or 'average' can be selected. By default 'last' | 'last' |
Notes
The list of input files has to be sorted, i.e. in chronological order.
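The overlap rules can be sketched with plain timestep-to-value mappings (a simplification for illustration; the actual files carry full spatial data per timestep):

```python
# timesteps (hours) and values from two chronologically sorted files
file_a = {0: 1.0, 1: 1.0, 2: 1.0}  # older file
file_b = {2: 9.0, 3: 9.0}          # newer file, overlapping at t=2

def combine(a, b, keep="last"):
    """Resolve overlapping timesteps the way concat's keep argument does."""
    out = {}
    for t in sorted(set(a) | set(b)):
        if t in a and t in b:
            if keep == "first":
                out[t] = a[t]          # keep the older file's value
            elif keep == "average":
                out[t] = (a[t] + b[t]) / 2
            else:                      # "last" (default): keep the newer value
                out[t] = b[t]
        else:
            out[t] = a.get(t, b.get(t))
    return out

print(combine(file_a, file_b))                  # keep='last'
print(combine(file_a, file_b, keep="average"))
```

At the overlapping timestep t=2, keep='last' takes 9.0, keep='first' takes 1.0 and keep='average' takes 5.0.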
diff
generic.diff(infilename_a, infilename_b, outfilename)
Calculate difference between two dfs files (a-b).
extract
generic.extract(infilename, outfilename, start=0, end=-1, step=1, items=None)
Extract timesteps and/or items to a new dfs file.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | path to input dfs file | required |
| outfilename | str | pathlib.Path | path to output dfs file | required |
| start | (int, float, str or datetime) | start of extraction as either step, relative seconds or datetime/str, by default 0 (start of file) | 0 |
| end | (int, float, str or datetime) | end of extraction as either step, relative seconds or datetime/str, by default -1 (end of file) | -1 |
| step | int | extract every step-th timestep, by default 1 (every step between start and end) | 1 |
| items | (int, list(int), str, list(str)) | items to be extracted to new file | None |
Examples
>>> extract('f_in.dfs0', 'f_out.dfs0', start='2018-1-1')
>>> extract('f_in.dfs2', 'f_out.dfs2', end=-3)
>>> extract('f_in.dfsu', 'f_out.dfsu', start=1800.0, end=3600.0)
>>> extract('f_hourly.dfsu', 'f_daily.dfsu', step=24)
>>> extract('f_in.dfsu', 'f_out.dfsu', items=[2, 0])
>>> extract('f_in.dfsu', 'f_out.dfsu', items="Salinity")
>>> extract('f_in.dfsu', 'f_out.dfsu', end='2018-2-1 00:00', items="Salinity")
fill_corrupt
generic.fill_corrupt(infilename, outfilename, fill_value=np.nan, items=None)
Replace corrupt (unreadable) data with fill_value, default delete value.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | full path to the input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
| fill_value | float | value to use where data is corrupt, default delete value | np.nan |
| items | Sequence[str | int] | None | Process only selected items, by number (0-based) or name, by default: all | None |
quantile
generic.quantile(infilename, outfilename, q, *, items=None, skipna=True, buffer_size=1000000000.0)
Create temporal quantiles of all items in dfs file.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| q | float | Sequence[float] | Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive. | required |
| items | Sequence[int | str] | int | str | None | Process only selected items, by number (0-based) or name, by default: all | None |
| skipna | bool | exclude NaN/delete values when computing the result, default True | True |
| buffer_size | float | for huge files the quantiles need to be calculated for chunks of elements. buffer_size gives the maximum amount of memory available for the computation in bytes, by default 1e9 (=1GB) | 1000000000.0 |
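The per-element quantile over the time axis can be illustrated with numpy's nanquantile, which matches the skipna=True semantics. A sketch, not the actual implementation:

```python
import numpy as np

# 5 timesteps, 2 elements; NaN marks a delete value
data = np.array([[1.0, 10.0],
                 [2.0, np.nan],
                 [3.0, 30.0],
                 [4.0, 40.0],
                 [5.0, 50.0]])

# element-wise 25th and 75th percentiles over time, ignoring delete values
q25, q75 = np.nanquantile(data, [0.25, 0.75], axis=0)

print(q25)
print(q75)
```

The second element is ranked over its four valid values only; with skipna=False (numpy's plain quantile) it would become a delete value in the output.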
Examples
>>> quantile("in.dfsu", "IQR.dfsu", q=[0.25, 0.75])
>>> quantile("huge.dfsu", "Q01.dfsu", q=0.1, buffer_size=5.0e9)
>>> quantile("with_nans.dfsu", "Q05.dfsu", q=0.5, skipna=False)
scale
generic.scale(infilename, outfilename, offset=0.0, factor=1.0, items=None)
Apply scaling to any dfs file.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | full path to the input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
| offset | float | value to add to all items, default 0.0 | 0.0 |
| factor | float | value to multiply all items by, default 1.0 | 1.0 |
| items | Sequence[int | str] | None | Process only selected items, by number (0-based) or name, by default: all | None |
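The arithmetic behind offset and factor is a linear transform of every value. A sketch assuming the conventional order (multiply by factor, then add offset), shown here as a Celsius-to-Fahrenheit conversion:

```python
import numpy as np

data = np.array([0.0, 10.0, 20.0])  # e.g. temperatures in degrees C

factor, offset = 1.8, 32.0          # degrees C -> degrees F
scaled = data * factor + offset     # assumed order: multiply, then add

print(scaled)
```

Only the selected items (or all items, by default) are transformed; delete values are left untouched by the real function.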
sum
generic.sum(infilename_a, infilename_b, outfilename)
Sum two dfs files (a+b).