Comparer class for comparing model and observation data.
The Comparer class is the main class of the ModelSkill package. It is returned by match(), from_matched() or as an element in a ComparerCollection. It holds the matched observation and model data for a single observation and has methods for plotting and skill assessment.
skill

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| by | str or List[str] | group by, by default ["model"]: by column name; by temporal bin of the DateTimeIndex via the freq-argument (using pandas pd.Grouper(freq)), e.g. 'freq:M' = monthly, 'freq:D' = daily; or by the dt accessor of the DateTimeIndex (e.g. 'dt.month') using the syntax 'dt:month'. The dt-argument differs from the freq-argument in that it gives month-of-year rather than month-of-data. | None |
| metrics | list | list of modelskill.metrics, by default modelskill.options.metrics.list | None |
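The difference between 'freq:…' and 'dt:…' grouping can be illustrated directly in pandas (a sketch with made-up data, independent of ModelSkill):

```python
import pandas as pd

# Hypothetical data: two values in January 2017 and one in January 2018
idx = pd.to_datetime(["2017-01-05", "2017-01-20", "2018-01-10"])
s = pd.Series([1.0, 2.0, 3.0], index=idx)

# 'freq:M' corresponds to pd.Grouper(freq="M"): month-of-data,
# so January 2017 and January 2018 land in separate bins
by_data_month = s.groupby(pd.Grouper(freq="M")).count()

# 'dt:month' uses the dt accessor: month-of-year,
# so both Januaries land in the same bin (month == 1)
by_month_of_year = s.groupby(s.index.month).count()
print(by_month_of_year)
```

Note that the 'freq:M' result has one bin per calendar month of the data period (including empty months), while the 'dt:month' result has at most 12 bins.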
Returns

| Type | Description |
|------|-------------|
| SkillTable | skill assessment object |
See also
`sel` : a method for filtering/selecting data
Examples
```python
>>> import modelskill as ms
>>> cc = ms.match(c2, mod)
>>> cc['c2'].skill().round(2)
               n  bias  rmse  urmse   mae    cc    si    r2
observation
c2           113 -0.00  0.35   0.35  0.29  0.97  0.12  0.99
```

```python
>>> cc['c2'].skill(by='freq:D').round(2)
             n  bias  rmse  urmse   mae    cc    si    r2
2017-10-27  72 -0.19  0.31   0.25  0.26  0.48  0.12  0.98
2017-10-28   0   NaN   NaN    NaN   NaN   NaN   NaN   NaN
2017-10-29  41  0.33  0.41   0.25  0.36  0.96  0.06  0.99
```
gridded_skill

Aggregated spatial skill assessment of model(s) on a regular spatial grid.
Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| bins | int | criteria to bin x and y by, passed as the bins argument to pd.cut(), by default 5; to define different bins for x and y, pass a tuple, e.g. bins=(5, [2, 3, 5]) | 5 |
| binsize | float | bin size for the x and y dimensions, overwrites bins; creates bins with reference to round(mean(x)), round(mean(y)) | None |
| by | str or List[str] | group by column name, or by temporal bin via the freq-argument (using pandas pd.Grouper(freq)), e.g. 'freq:M' = monthly, 'freq:D' = daily; by default ["model", "observation"] | None |
| metrics | list | list of modelskill.metrics, by default modelskill.options.metrics.list | None |
| n_min | int | minimum number of observations in a grid cell; cells with fewer observations get a score of np.nan | None |
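Since bins is handed to pd.cut(), its accepted forms can be previewed in pandas alone (a sketch with made-up x values):

```python
import pandas as pd

x = pd.Series([0.1, 1.2, 2.5, 3.8, 4.9])

# an integer, as in bins=3: three equal-width bins over the data range
three_bins = pd.cut(x, bins=3)

# explicit bin edges, as in the list part of the tuple form bins=(5, [2, 3, 5])
edge_bins = pd.cut(x, bins=[0, 2, 3, 5])
print(three_bins.value_counts())
```

In the tuple form, the first element sets the bins for x and the second the bins for y.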
Returns

| Type | Description |
|------|-------------|
| SkillGrid | skill assessment as a SkillGrid object |
See also
`skill` : a method for aggregated skill assessment
Examples
```python
>>> import modelskill as ms
>>> cmp = ms.match(c2, mod)  # satellite altimeter vs. model
>>> cmp.gridded_skill(metrics='bias')
<xarray.Dataset>
Dimensions:      (x: 5, y: 5)
Coordinates:
    observation  'alti'
  * x            (x) float64 -0.436 1.543 3.517 5.492 7.466
  * y            (y) float64 50.6 51.66 52.7 53.75 54.8
Data variables:
    n            (x, y) int32 3 0 0 14 37 17 50 36 72 ... 0 0 15 20 0 0 0 28 76
    bias         (x, y) float64 -0.02626 nan nan ... nan 0.06785 -0.1143
```

```python
>>> gs = cc.gridded_skill(binsize=0.5)
>>> gs.data.coords
Coordinates:
    observation  'alti'
  * x            (x) float64 -1.5 -0.5 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5
  * y            (y) float64 51.5 52.5 53.5 54.5 55.5 56.5
```
score
Comparer.score(metric=mtr.rmse, **kwargs)
Model skill score
Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| metric | str or callable | a single metric from modelskill.metrics, by default rmse | mtr.rmse |
Returns

| Type | Description |
|------|-------------|
| dict[str, float] | skill score as a single number (for each model) |
See also
`skill` : a method for skill assessment returning a pd.DataFrame
Examples
```python
>>> import modelskill as ms
>>> cmp = ms.match(c2, mod)
>>> cmp.score()
{'mod': 0.3517964910888918}
```
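For reference, the default rmse metric reduces the matched series to a single number along these lines (a minimal sketch of the formula, not the modelskill.metrics implementation itself):

```python
import numpy as np

def rmse(obs, model):
    # Root-mean-square error: sqrt of the mean squared model-observation difference
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

With several models matched, score() applies the chosen metric per model, which is why it returns a dict keyed by model name.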