Getting started

This page describes a simple ModelSkill workflow for the case where model results and observations are already matched. See the workflow page for a more elaborate example.

Installation

pip install modelskill

Skill assessment

The simplest use case for skill assessment is when you have a dataset of matched model results and observations in tabular format.

import pandas as pd
import modelskill as ms
df = pd.read_csv("../data/Vistula/sim1/6158100.csv", parse_dates=True, index_col="Date")
df.head()
             Qobs   Qsim  Prec
Date
2000-01-02    5.2  4.641  0.11
2000-01-03    5.2  4.666  0.05
2000-01-04    5.2  4.556  0.72
2000-01-05    5.2  4.556  0.72
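If the sample CSV is not at hand, a DataFrame with the same layout can be built directly in pandas. The sketch below reuses the first few values shown above; the only requirement for the matched-data workflow is a datetime index and one column per observation/model item.

```python
import pandas as pd

# Minimal stand-in for the Vistula sample data: a datetime-indexed
# DataFrame with observed discharge, simulated discharge and precipitation.
dates = pd.date_range("2000-01-02", periods=5, freq="D")
df = pd.DataFrame(
    {
        "Qobs": [5.2, 5.2, 5.2, 5.2, 5.2],            # observed discharge [m3/s]
        "Qsim": [4.641, 4.666, 4.556, 4.470, 4.391],  # simulated discharge [m3/s]
        "Prec": [0.11, 0.05, 0.72, 0.30, 1.38],       # precipitation
    },
    index=pd.Index(dates, name="Date"),
)
# A DataFrame of this shape can be passed to ms.from_matched() exactly
# as the CSV-based one is below.
```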
cmp = ms.from_matched(df, obs_item="Qobs", mod_items="Qsim", quantity=ms.Quantity("Discharge", "m3/s"))
cmp
<Comparer>
Quantity: Discharge [m3/s]
Observation: Qobs, n_points=3653
Model(s):
0: Qsim

A time series plot is a common way to visualize the comparison.

cmp.plot.timeseries()

A more quantitative way to analyze the compared data is a scatter plot, which can optionally include a skill table (see Definition of the metrics).

cmp.plot.scatter(skill_table=True)

The skill table can also be produced in tabular format, optionally specifying which metrics to include.

cmp.skill(metrics=["bias", "mae", "rmse", "kge", "si"])
                n      bias       mae       rmse       kge        si
observation
Qobs         3653 -5.303471  7.473344  14.903921  0.360125  0.866969
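As a cross-check, the simpler metrics can be computed directly with NumPy. This is a sketch assuming the common definitions (bias as mean error, MAE as mean absolute error, RMSE as root-mean-square error), applied here to a few made-up values rather than the Vistula data:

```python
import numpy as np

def bias(obs, mod):
    """Mean error: positive when the model overestimates."""
    return np.mean(mod - obs)

def mae(obs, mod):
    """Mean absolute error."""
    return np.mean(np.abs(mod - obs))

def rmse(obs, mod):
    """Root-mean-square error."""
    return np.sqrt(np.mean((mod - obs) ** 2))

# Tiny illustrative arrays (not the full Vistula series)
obs = np.array([5.2, 5.2, 5.2])
mod = np.array([4.641, 4.666, 4.556])
print(bias(obs, mod))  # negative: the model underestimates here
```

A negative bias, as in the skill table above, indicates that the model on average underestimates the observed discharge.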