Terminology
ModelSkill is a library for assessing the skill of numerical models. It provides tools for comparing model results with observations, plotting the results and calculating validation metrics. This page defines some of the key terms used in the documentation.
Skill
Skill refers to the ability of a numerical model to accurately represent the real-world phenomenon it aims to simulate. It is a measure of how well the model performs in reproducing the observed system. Skill can be assessed using various metrics, such as accuracy, precision, and reliability, depending on the specific goals of the model and the nature of the data. In ModelSkill, `skill` is also a specific method on Comparer objects that returns a `SkillTable` with aggregated skill scores per observation and model for a list of selected metrics.
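A minimal sketch of the `skill` method, using made-up matched data (the column names, values, and metric selection are illustrative only):

```python
import pandas as pd
import modelskill as ms

# Toy matched data: one observation column and one model column
df = pd.DataFrame(
    {"obs": [1.0, 2.0, 3.0, 2.5], "m1": [1.1, 1.9, 3.2, 2.4]},
    index=pd.date_range("2020-01-01", periods=4, freq="h"),
)
cmp = ms.from_matched(df, obs_item="obs", mod_items=["m1"])

# SkillTable with one row per observation/model combination
sk = cmp.skill(metrics=["bias", "rmse", "mae"])
print(sk)
```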
Validation
Validation is the process of assessing the model's performance by comparing its output to real-world observations or data collected from the system being modeled. It helps ensure that the model accurately represents the system it simulates. Validation is typically performed before the model is used for prediction or decision-making.
Calibration
Calibration is the process of adjusting the model's parameters or settings to improve its performance. It involves fine-tuning the model to better match observed data. Calibration aims to reduce discrepancies between model predictions and actual measurements. At the end of the calibration process, the calibrated model should be validated with independent data.
Performance
Performance is a measure of how well a numerical model operates in reproducing the observed system. It can be assessed using various metrics, such as accuracy, precision, and reliability, depending on the specific goals of the model and the nature of the data. In this context, performance is synonymous with skill.
Timeseries
A timeseries is a sequence of data points in time. In ModelSkill, the data can be either observations or model results. Timeseries can be univariate or multivariate; ModelSkill primarily supports univariate timeseries, and multivariate timeseries can be assessed one variable at a time. Timeseries can also have different spatial dimensions, such as point, track, line, or area.
Observation
An observation refers to real-world data or measurements collected from the system you are modeling. Observations serve as a reference for assessing the model's performance. These data points are used to compare with the model's predictions during validation and calibration. Observations are usually based on field measurements or laboratory experiments, but for the purposes of model validation, they can also be derived from other models (e.g. a reference model). ModelSkill supports point and track observation types.
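For example (a sketch; the file names, item numbers, and coordinates below are placeholders, not shipped data):

```python
import modelskill as ms

# A fixed measurement station at a known position
obs = ms.PointObservation("station.dfs0", item=0, x=4.24, y=52.7, name="station1")

# A moving platform, e.g. a satellite altimetry track
track = ms.TrackObservation("track.dfs0", item=2, name="alti")
```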
Measurement
A measurement is called an observation in ModelSkill.
Model result
A model result is the output of any type of numerical model. It is the data generated by the model during a simulation. Model results can be compared with observations to assess the model's performance. In the context of validation, the term "model result" is often used interchangeably with "model output" or "model prediction". ModelSkill supports point, track, dfsu and grid model result types.
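A sketch of constructing the different model result types (file names and item names are placeholders):

```python
import modelskill as ms

mr_point = ms.PointModelResult("ts_model.dfs0", item=0, name="local")
mr_track = ms.TrackModelResult("track_model.dfs0", item=2, name="global")
mr_dfsu = ms.DfsuModelResult("area_model.dfsu", item="Surface elevation", name="MIKE")
mr_grid = ms.GridModelResult("area_model.nc", item="swh", name="ERA5")
```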
Metric
A metric is a quantitative measure (a mathematical expression) used to evaluate the performance of a numerical model. Metrics provide a standardized way to assess the model's accuracy, precision, and other attributes. A metric aggregates the skill of a model into a single number. See list of metrics supported by ModelSkill.
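The functions in the `modelskill.metrics` module can also be called directly on arrays of matched values, e.g. (toy values for illustration):

```python
import numpy as np
import modelskill as ms

obs = np.array([1.0, 2.0, 3.0, 2.5])
mod = np.array([1.1, 1.9, 3.2, 2.4])

# Each metric reduces the matched pairs to a single number
print(ms.metrics.rmse(obs, mod))
print(ms.metrics.bias(obs, mod))
```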
Score
A score is a numerical value that summarizes the model's performance based on chosen metrics. Scores can be used to rank or compare different models or model configurations. In the context of validation, the "skill score" or "validation score" often quantifies the model's overall performance. The score of a model is a single number, calculated as a weighted average over all time steps, observations, and variables. If you want to perform automated calibration, you can use the score as the objective function. In ModelSkill, `score` is also a specific method on Comparer objects that returns a single aggregated number for a specific metric.
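Continuing from the `cmp` object created in the skill example above (the metric choice is arbitrary):

```python
# One aggregated number; note that recent versions may return the
# value keyed by model name rather than a bare float
sc = cmp.score(metric="mae")
print(sc)
```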
Matched data
In ModelSkill, observations and model results are matched when they refer to the same positions in space and time. If the observations and model results are already matched, the `from_matched` function can be used to create a Comparer directly. Otherwise, the `match` function can be used to match the observations and model results in space and time.
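A sketch of the `from_matched` route, with made-up data for two model results sharing the observation's timestamps:

```python
import pandas as pd
import modelskill as ms

df = pd.DataFrame(
    {
        "observation": [0.3, 0.5, 0.4, 0.6],
        "model_A": [0.35, 0.48, 0.41, 0.63],
        "model_B": [0.28, 0.52, 0.45, 0.58],
    },
    index=pd.date_range("2020-01-01", periods=4, freq="h"),
)
cmp = ms.from_matched(df, obs_item="observation", mod_items=["model_A", "model_B"])
```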
match()
The function `match` is used to match a model result with observations. It returns a `Comparer` object or a `ComparerCollection` object.
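For example (file names and items are placeholders):

```python
import modelskill as ms

obs = ms.PointObservation("station.dfs0", item=0, x=4.24, y=52.7)
mr = ms.DfsuModelResult("area_model.dfsu", item="Surface elevation")

# A single observation yields a Comparer
cmp = ms.match(obs, mr)

# A list of observations yields a ComparerCollection
track = ms.TrackObservation("track.dfs0", item=2)
cc = ms.match([obs, track], mr)
```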
Comparer
A Comparer is an object that stores the matched observation and model result data for a single observation. It is used to calculate validation metrics and generate plots. A Comparer can be created using the `match` function.
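Typical usage, continuing from a `cmp` object created with `match` or `from_matched` above:

```python
print(cmp.skill())     # validation metrics as a SkillTable
cmp.plot.timeseries()  # observed and modelled values over time
cmp.plot.scatter()     # scatter comparison of observed vs modelled
```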
ComparerCollection
A ComparerCollection is a collection of Comparers. It is used to compare multiple observations with one or more model results. A ComparerCollection can be created using the `match` function or by passing a list of Comparers to the `ComparerCollection` constructor.
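A self-contained sketch of the constructor route; the helper below just fabricates two Comparers from toy data for illustration:

```python
import pandas as pd
import modelskill as ms

def toy_comparer(name: str) -> ms.Comparer:
    # Illustrative helper: build a Comparer from made-up matched data
    df = pd.DataFrame(
        {"obs": [1.0, 2.0, 3.0], "m1": [1.1, 1.9, 3.2]},
        index=pd.date_range("2020-01-01", periods=3, freq="h"),
    )
    return ms.from_matched(df, obs_item="obs", mod_items=["m1"], name=name)

cc = ms.ComparerCollection([toy_comparer("station_A"), toy_comparer("station_B")])
print(cc.skill())   # skill per observation and model
print(cc.score())   # one aggregated value per model
```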
Connector
In past versions of FMSkill/ModelSkill, the Connector class was used to connect observations and model results. This class has been deprecated and is no longer in use.
Abbreviations
Abbreviation | Meaning
---|---
ms | ModelSkill
o or obs | Observation
mr or mod | Model result
cmp | Comparer
cc | ComparerCollection
sk | SkillTable
mtr | Metric
q | Quantity