```python
from mikeio import generic

generic.diff("../data/oresundHD_run1.dfsu", "../data/oresundHD_run2.dfsu", "diff.dfsu")
```
generic
Generic functions for working with all types of dfs files.
| Name | Description |
|---|---|
| add | Add two dfs files (a+b). |
| avg_time | Create a temporally averaged dfs file. |
| change_datatype | Change datatype of a DFS file. |
| concat | Concatenates files along the time axis. |
| diff | Calculate difference between two dfs files (a-b). |
| extract | Extract timesteps and/or items to a new dfs file. |
| fill_corrupt | Replace corrupt (unreadable) data with fill_value, default delete value. |
| quantile | Create temporal quantiles of all items in dfs file. |
| scale | Apply scaling to any dfs file. |
| transform | Transform a dfs file by applying functions to items. |
Add two dfs files (a+b).
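A minimal sketch of `add`; the file names are placeholders, and the call assumes the same signature as `diff` below (two input files, one output file):

```python
from mikeio import generic

# Item-wise sum of two compatible dfs files (a + b); file names are illustrative
generic.add("run_a.dfsu", "run_b.dfsu", "sum.dfsu")
```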
Create a temporally averaged dfs file.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| skipna | bool | exclude NaN/delete values when computing the result, default True | True |
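A minimal usage sketch for `avg_time`, reusing the Oresund example file from above (the output name is illustrative):

```python
from mikeio import generic

# Collapse the time axis to a single temporal mean per item (NaN/delete values skipped by default)
generic.avg_time("../data/oresundHD_run1.dfsu", "avg.dfsu", skipna=True)
```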
Change datatype of a DFS file.
The data type tag is used to classify the file within a specific modeling context, such as MIKE 21. There is no global standard for these tags—they are interpreted locally within a model setup.
Application developers can use these tags to classify files such as bathymetries, input data, or result files according to their own conventions.
Default data type values assigned by MikeIO when creating new files are:

- dfs0: datatype=1
- dfs1-3: datatype=0
- dfsu: datatype=2001
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| datatype | int | DataType to be used for the output file | required |
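A minimal sketch of `change_datatype`, again reusing the Oresund example file; the datatype value 107 is an arbitrary illustration, not a MIKE convention:

```python
from mikeio import generic

# Re-tag the output file with a model-specific data type (107 is just an example value)
generic.change_datatype("../data/oresundHD_run1.dfsu", "retagged.dfsu", datatype=107)
```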
Concatenates files along the time axis.
Overlap handling is controlled by the keep argument; by default, the last (newest) file is used.
| Name | Type | Description | Default |
|---|---|---|---|
| infilenames | Sequence[str | pathlib.Path] | filenames to concatenate | required |
| outfilename | str | pathlib.Path | filename of output | required |
| keep | str | either ‘first’ (keep older), ‘last’ (keep newer) or ‘average’ can be selected. By default ‘last’ | 'last' |
The list of input files has to be sorted, i.e. in chronological order.
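A hedged sketch of `concat` with placeholder file names, assuming the files are already in chronological order:

```python
from mikeio import generic

# Join two consecutive result files; in overlapping periods the newer file wins (keep="last")
generic.concat(["january.dfsu", "february.dfsu"], "jan_feb.dfsu", keep="last")
```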
Calculate difference between two dfs files (a-b).
| Name | Type | Description | Default |
|---|---|---|---|
| infilename_a | str | pathlib.Path | full path to the first input file | required |
| infilename_b | str | pathlib.Path | full path to the second input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
Extract timesteps and/or items to a new dfs file.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | path to input dfs file | required |
| outfilename | str | pathlib.Path | path to output dfs file | required |
| start | (int, float, str or datetime) | start of extraction as either step, relative seconds or datetime/str, by default 0 (start of file) | 0 |
| end | (int, float, str or datetime) | end of extraction as either step, relative seconds or datetime/str, by default -1 (end of file) | -1 |
| step | int | jump this many steps, by default 1 (every step between start and end) | 1 |
| items | (int, list(int), str, list(str)) | items to be extracted to new file | None |
```python
>>> extract('f_in.dfs0', 'f_out.dfs0', start='2018-1-1')
>>> extract('f_in.dfs2', 'f_out.dfs2', end=-3)
>>> extract('f_in.dfsu', 'f_out.dfsu', start=1800.0, end=3600.0)
>>> extract('f_hourly.dfsu', 'f_daily.dfsu', step=24)
>>> extract('f_in.dfsu', 'f_out.dfsu', items=[2, 0])
>>> extract('f_in.dfsu', 'f_out.dfsu', items="Salinity")
>>> extract('f_in.dfsu', 'f_out.dfsu', end='2018-2-1 00:00', items="Salinity")
```
Replace corrupt (unreadable) data with fill_value, default delete value.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | full path to the input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
| fill_value | float | value to use where data is corrupt, default delete value | np.nan |
| items | Sequence[str | int] | None | Process only selected items, by number (0-based) or name, by default: all | None |
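A hedged sketch of `fill_corrupt` with placeholder file names; filling with np.nan corresponds to the default delete value:

```python
import numpy as np

from mikeio import generic

# Replace unreadable values in all items with the delete value (NaN)
generic.fill_corrupt("damaged.dfsu", "repaired.dfsu", fill_value=np.nan)
```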
Create temporal quantiles of all items in dfs file.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | input filename | required |
| outfilename | str | pathlib.Path | output filename | required |
| q | float | Sequence[float] | Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive. | required |
| items | Sequence[int | str] | int | str | None | Process only selected items, by number (0-based) or name, by default: all | None |
| skipna | bool | exclude NaN/delete values when computing the result, default True | True |
| buffer_size | float | for huge files the quantiles need to be calculated for chunks of elements. buffer_size gives the maximum amount of memory available for the computation in bytes, by default 1e9 (=1GB) | 1000000000.0 |
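A minimal sketch of `quantile`, reusing the Oresund example file (the output name is illustrative):

```python
from mikeio import generic

# Write the median and the 90th percentile of every item over time
generic.quantile("../data/oresundHD_run1.dfsu", "quantiles.dfsu", q=[0.5, 0.9])
```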
Apply scaling to any dfs file.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | full path to the input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
| offset | float | value to add to all items, default 0.0 | 0.0 |
| factor | float | value to multiply to all items, default 1.0 | 1.0 |
| items | Sequence[int | str] | None | Process only selected items, by number (0-based) or name, by default: all | None |
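A hedged sketch of `scale` with placeholder file names, assuming the factor is applied before the offset (x * factor + offset):

```python
from mikeio import generic

# Convert all items from degrees Celsius to Fahrenheit: x * 1.8 + 32
generic.scale("temperature_C.dfs0", "temperature_F.dfs0", factor=1.8, offset=32.0)
```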
Transform a dfs file by applying functions to items.
| Name | Type | Description | Default |
|---|---|---|---|
| infilename | str | pathlib.Path | full path to the input file | required |
| outfilename | str | pathlib.Path | full path to the output file | required |
| vars | Sequence[DerivedItem] | List of derived items to compute. | required |
| keep_existing_items | bool | If True, existing items in the input file will be kept in the output file. If False, only the derived items will be written to the output file. Default is True. | True |
```python
import numpy as np

import mikeio
from mikeio.generic import DerivedItem, transform

# Derive current speed from the U and V velocity components
item = DerivedItem(
    name="Current Speed",
    type=mikeio.EUMType.Current_Speed,
    func=lambda x: np.sqrt(x["U velocity"] ** 2 + x["V velocity"] ** 2),
)

# Write only the derived item to the output file
transform(
    infilename="../data/oresundHD_run1.dfsu",
    outfilename="out.dfsu",
    vars=[item],
    keep_existing_items=False,
)
```