CSERD


General Background


The validation process is more complicated than the verification process in that the reviewer needs a greater amount of subject-area expertise. Each subject field, and even different parts of the same general field, has its own specialized vocabulary and its own knowledge of acceptable approaches to modeling and simulating different subsystems. Different types of materials also require different criteria for their review. For example, a mathematical model of a biological system will be judged against standards of correctness for modeling that system, while a reference dataset in biology, such as one for genetics, will be judged on its accuracy, currency, and the ease of extracting the desired data.

Fundamentally, a valid item must pass four tests: coherence, credibility, organization, and relevance. Coherence means that the knowledge represented by the model "fits in" with known information; a coherent application of a model should be able to reproduce the results of experiments within the model's domain of applicability. Credibility requires that the evidence supporting a model be independent of the model itself: if theories are allowed, indeed required, to select their own evidence and then to give meaning and credibility to the observations, the testing process becomes unavoidably circular and self-serving. Organization refers to the model's proper place in a methodology of scientific practice: observations lead to concepts, which lead to constructs, which lead to principles, which lead to theories, and models and simulations should make use of this structure. Relevance is the ability to retrieve material that satisfies the needs of the user; a scientist might more aptly think of this as a causality relation: something may be observed, but that observation may not be connected to the problem at hand.
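In code, a coherence test often reduces to asking whether model predictions agree with observations, within an agreed tolerance, for every case inside the model's domain of applicability. The following is a minimal Python sketch of that idea; the function name, arguments, and tolerance are illustrative assumptions, not part of any CSERD review tool.

    # Hypothetical sketch of a coherence check: the model is tested only
    # inside its stated domain of applicability, and its predictions must
    # agree with observations within a relative tolerance.

    def coherence_check(model, experiments, in_domain, rel_tol=0.05):
        """Return True if the model reproduces every covered experiment.

        model       -- callable mapping an input condition to a prediction
        experiments -- iterable of (condition, observed_value) pairs
        in_domain   -- callable returning True if a condition lies inside
                       the model's domain of applicability
        rel_tol     -- maximum acceptable relative error (assumed value)
        """
        for condition, observed in experiments:
            if not in_domain(condition):
                continue  # outside the domain: the model makes no claim here
            predicted = model(condition)
            if abs(predicted - observed) > rel_tol * abs(observed):
                return False  # fails to reproduce an experiment it should cover
        return True

The key design point is the domain filter: a model is not judged incoherent for failing to predict phenomena it never claimed to cover.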

Validation is complicated by the fact that while it is possible for an invalid model to be verified, it is not possible for a false model to be valid. For many false models, the symptomatic result will be that the model fails the validation test. It is not immediately obvious whether a model that fails the test of coherence does so because of problems with the model itself or because of how the model is applied. A model that is not properly verified (e.g. one that has bugs) may not appear false until coherence tests are applied. On the other hand, a fully verified model may fail coherence simply because the model is wrong!

For example, consider a model of a falling object that is properly coded except that it contains an incorrect value for the acceleration due to gravity. The code is based on sound mathematical logic, is coded correctly apart from the value of gravity, and runs on all platforms tested. The results qualitatively agree with what one expects; however, when tested against experimental data, the code fails to predict the results of the experiment. Thus a model that was improperly verified fails the validation test of coherence.
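A minimal Python sketch of this situation is given below. The mis-entered constant (9.0 m/s^2) and the drop heights are illustrative assumptions, and the "observed" fall times are computed here from the accepted value of g as a stand-in for real experimental data.

    import math

    G_WRONG = 9.0      # mis-entered acceleration due to gravity (m/s^2)
    G_ACCEPTED = 9.81  # accepted value, used as a stand-in for experiment

    def simulated_fall_time(height, g, dt=1e-4):
        """Integrate free fall from rest until the object reaches y = 0."""
        y, v, t = height, 0.0, 0.0
        while y > 0.0:
            v += g * dt
            y -= v * dt
            t += dt
        return t

    for height in (1.0, 5.0, 20.0):  # illustrative drop heights (m)
        predicted = simulated_fall_time(height, G_WRONG)
        observed = math.sqrt(2.0 * height / G_ACCEPTED)  # stand-in measurement
        error = abs(predicted - observed) / observed
        print(f"h = {height:5.1f} m  predicted {predicted:.3f} s  "
              f"observed {observed:.3f} s  error {error:.1%}")

Every predicted fall time comes out a few percent too long, so the code behaves sensibly in every qualitative respect yet fails a quantitative comparison with measurement: an error that verification should have caught surfaces only during validation.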

As a counterexample, consider early models of electromagnetic wave propagation, in particular the model of the ether. A reasonable assumption at the time was that waves could not propagate without a medium, so a model was devised in which EM waves traveled through an otherwise undetectable medium. This model was based on the physical laws known at the time, in which EM energy showed wave-like effects, and it yielded self-consistent predictions: interferometry experiments should be able to measure the motion of the Earth through the ether. The Michelson-Morley experiment was devised to do exactly this, and its result was that there was no effect due to the ether. The fully verified model of the ether was thus shown to be incoherent with other known scientific data, in particular the Michelson-Morley result, leading to the eventual rejection of the model of the ether.

When invalidating a model, one should strive to determine whether the failure is due to a flaw in the model itself or to a previously undiscovered problem with verification.


©1994-2024 Shodor