Experimental parameter variance

This forum contains all archives from the SD Mailing List (go to http://www.systemdynamics.org/forum/ for more information). It is kept here as a read-only resource; please post any SD-related questions to the SD Discussion forum.
fadl alakwa fadlmaster1 yahoo.co
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by fadl alakwa fadlmaster1 yahoo.co »

Posted by fadl alakwa <fadlmaster1@yahoo.com>

Hello everyone,

My model is sixth order, has 100 parameters, and has two test inputs.

When I calibrate the model parameters against the real data for each test input, I get two different sets of parameters (for example, a1 = 0.1 for TEST INPUT 1, but a1 = 100 for TEST INPUT 2).

This happens for all parameter values in my model.

Does this mean that my model is wrong (the parameters should be the same for all test inputs)? If so, I must change the theory on which I based this model.
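
For illustration, here is a minimal sketch of this kind of two-input calibration. The one-parameter first-order model, the inflow values, and the synthetic data are hypothetical stand-ins for the real sixth-order model, and scipy's curve_fit stands in for the calibration routine:

```python
# Minimal sketch (not the original sixth-order model): calibrate the same
# one-parameter stock model against two test inputs separately and compare
# the estimates. All data here are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def stock(t, a1, inflow):
    # analytic solution of d(stock)/dt = inflow - a1 * stock, with stock(0) = 0
    return (inflow / a1) * (1.0 - np.exp(-a1 * t))

t = np.linspace(0.0, 10.0, 50)

# two hypothetical test inputs with their observed (here: synthetic) series
observed_1 = stock(t, a1=0.1, inflow=5.0) + np.random.normal(0, 0.05, t.size)
observed_2 = stock(t, a1=0.1, inflow=20.0) + np.random.normal(0, 0.05, t.size)

# calibrate a1 separately for each test input (inflow known in each case)
a1_fit_1, _ = curve_fit(lambda t, a1: stock(t, a1, inflow=5.0), t, observed_1, p0=[1.0])
a1_fit_2, _ = curve_fit(lambda t, a1: stock(t, a1, inflow=20.0), t, observed_2, p0=[1.0])

print("a1 from test input 1:", a1_fit_1[0])
print("a1 from test input 2:", a1_fit_2[0])
# If the model structure is right, the two estimates should agree up to noise;
# wildly different values are the symptom described in this post.
```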


Fadl M. Ahmed
Posted by fadl alakwa <fadlmaster1@yahoo.com>
posting date Sat, 1 Oct 2005 21:22:04 -0700 (PDT)
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hello Fadl,

Some things must be checked.

First, the test inputs. I suppose there are several real time series in each test input, corresponding to several variables of the model. Do the test inputs have the same corresponding variables? Do they come from the same 'kind of reality' and the same time period, or did something change between them? If the answer to both questions is yes, the test inputs should have the same values and there is no need to calibrate with both of them. Do the test inputs look realistic independently of any modelling, and can they represent the same reality independently of any modelling? Are the data reliable? If everything checks out, one can look at the model.
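
A minimal sketch of these checks on the data themselves, assuming the two test inputs are available as CSV files; the file names and column layout are hypothetical:

```python
# Hypothetical check that the two test inputs cover the same variables,
# the same time period, and look plausible before any calibration.
import pandas as pd

test1 = pd.read_csv("test_input_1.csv", index_col="time")
test2 = pd.read_csv("test_input_2.csv", index_col="time")

# Same variables?
print("variables only in test 1:", set(test1.columns) - set(test2.columns))
print("variables only in test 2:", set(test2.columns) - set(test1.columns))

# Same time period?
print("test 1 spans", test1.index.min(), "to", test1.index.max())
print("test 2 spans", test2.index.min(), "to", test2.index.max())

# Quick plausibility look: ranges and missing values per variable
print(test1.describe())
print(test1.isna().sum())
```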

About the model.
It is difficult to have an opinion on the model without knowing it and without knowledge of the subject.

A remark: you have 100 parameters. To calibrate 100 parameters correctly, you need a great deal of data to check them against, that is, many time steps and many variables in each test input.

Another remark: do the parameters that take very different values in the two calibrations have a strong influence on the data in the test inputs? Parameters that have only a small influence on the data you check against can end up with very different values purely by chance.
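
A minimal sketch of such an influence check, perturbing each parameter and watching how much the simulated output moves; the two-parameter model and its values are hypothetical stand-ins for the real simulation:

```python
# Crude sensitivity check: perturb each parameter and see how much the
# simulated output (the one compared with the test input) changes.
# The model here is a hypothetical two-parameter stand-in, not the real one.
import numpy as np

def simulate(a1, a2, t):
    # placeholder model output; substitute the real simulation call
    return (a2 / a1) * (1.0 - np.exp(-a1 * t))

t = np.linspace(0.0, 10.0, 50)
base = {"a1": 0.1, "a2": 5.0}
y_base = simulate(base["a1"], base["a2"], t)

for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                      # +10% perturbation
    y_pert = simulate(perturbed["a1"], perturbed["a2"], t)
    rel_change = np.max(np.abs(y_pert - y_base) / np.maximum(np.abs(y_base), 1e-9))
    print(f"{name}: max relative output change for +10% = {rel_change:.3f}")
# Parameters whose perturbation barely moves the output are the ones whose
# calibrated values can differ widely just by chance.
```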

Regards.
J.J. Laublé. Allocar
Strasbourg. France
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Sun, 2 Oct 2005 18:53:15 +0200
Raymond Joseph rtjoseph earthlin
Newbie
Posts: 1
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by Raymond Joseph rtjoseph earthlin »

Posted by ""Raymond Joseph"" <rtjoseph@earthlink.net>
Just another consideration:

Have the initial values of the model been set to match the initial conditions at the start of the data collection?
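
A minimal sketch of that check, assuming the observed series sit in a CSV file and the model levels are set from a dictionary; the file name, level names, and column names are all hypothetical:

```python
# Hypothetical check: initialize each level from the first observation of the
# corresponding data series at the start of the calibration period.
import pandas as pd

data = pd.read_csv("test_input_1.csv", index_col="time")     # hypothetical file
level_to_series = {"Inventory": "inventory_obs",             # model level -> data column
                   "Backlog": "backlog_obs"}                  # (hypothetical names)

initial_values = {level: data[col].iloc[0] for level, col in level_to_series.items()}
print("initial conditions taken from the data:", initial_values)
# These values would then be passed to the simulation as the levels' initial
# values instead of being left at arbitrary defaults.
```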

Ray
Posted by ""Raymond Joseph"" <rtjoseph@earthlink.net>
posting date Mon, 3 Oct 2005 18:15:58 -0500
Tom Fiddaman tom vensim.com
Junior Member
Posts: 9
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by Tom Fiddaman tom vensim.com »

Posted by Tom Fiddaman <tom@vensim.com>
Generically, if you can't reproduce your data with consistent parameter values, there is something wrong (or perhaps just missing) in your model. It's hard to say more without getting into specifics of the model, but I think the key question is *why* a1 has to be different in order to fit the two different sets of data.
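
One concrete way to pose that question is a pooled calibration: fit a single shared parameter set to both test inputs at once and see how badly it fails. A minimal sketch, using a hypothetical first-order stand-in and synthetic data rather than the actual model, with scipy's least_squares as the fitting routine:

```python
# Minimal sketch of a pooled calibration: fit one shared parameter set to
# both test inputs at once. If no single a1 can fit both, the mismatch shows
# up as a large pooled residual. Model and data are hypothetical stand-ins.
import numpy as np
from scipy.optimize import least_squares

def simulate(a1, inflow, t):
    # placeholder first-order model; substitute the real simulation
    return (inflow / a1) * (1.0 - np.exp(-a1 * t))

t = np.linspace(0.0, 10.0, 50)
obs_1 = simulate(0.1, 5.0, t)    # synthetic "real data" for test input 1
obs_2 = simulate(0.1, 20.0, t)   # synthetic "real data" for test input 2

def pooled_residuals(params):
    (a1,) = params
    # one shared a1 must explain both experiments
    return np.concatenate([simulate(a1, 5.0, t) - obs_1,
                           simulate(a1, 20.0, t) - obs_2])

fit = least_squares(pooled_residuals, x0=[1.0])
print("shared a1 estimate:", fit.x[0], "pooled cost:", fit.cost)
```

If the pooled fit is poor while each separate fit is good, the disagreement in a1 is genuine and points at something missing or wrong in the structure, which is Tom's point above.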

An observation: 100 parameters seems like a lot for a 6th order model. If it were linear, there would be just a 6x6 matrix of coefficients plus 6 initial conditions - but it's unusual for a model to be fully connected (not every level influences the rate into every other level). In a typical model, I'd expect maybe 5 parameters per level (e.g. a 2-parameter nonlinear equation for the inflow and outflow of each level and an initial value) plus some extras to define exogenous inputs. Jack Homer's burnout model is 6th order and has 13 constants and 9 lookups - equivalent to 31 parameters if you replace each lookup with a 2-parameter nonlinear function. Nordhaus' DICE model has 11 levels (though it's effectively only 4th order) and 35 constants. Forrester's Urban Dynamics has 20 levels, 95 constants, and 53 lookups. I don't think parameter density is a very meaningful indicator, but when it's very low it may mean that a model has lots of buried dimensional inconsistencies, and when it's very high it could indicate redundancy or something else atypical.

Tom
Posted by Tom Fiddaman <tom@vensim.com>
posting date Wed, 05 Oct 2005 13:51:09 -0600
Jean-Jacques Laublé jean-jacques
Senior Member
Posts: 68
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by Jean-Jacques Laublé jean-jacques »

Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>

Hi,

The high number of parameters is not peculiar to this model. When I built most of my models, I experienced the same kind of difficulty. The influence diagram was easy to build and looked credible, and the equations were also relatively easy to write and credible, but finding the values of the parameters, and the exogenous data that represent the outside world, was incomparably more difficult.

It may come from the kind of models I build. But do other people on this list have the same difficulties?

There were too many parameters for the available real data to calibrate them. There was also a problem with the independence of the parameters, which was uncertain and not sufficient. It is difficult to use sensitivity analysis with dependent parameters. Of course, it is always possible to resolve the dependence by extending the boundaries of the model and finding the common source of the dependent parameters, but that operation adds new parameters, and there is no proof that there is a level of model extension at which all the parameters exhibit a sufficient degree of independence. So the question is: are the 100 parameters independent, and is this really the smallest set of parameters imaginable for this problem?

Regards.
J.J. Laublé. Allocar
Strasbourg, France.
Posted by Jean-Jacques Laublé <jean-jacques.lauble@wanadoo.fr>
posting date Wed, 12 Oct 2005 09:14:30 +0200
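
A minimal sketch of one way to probe such parameter interdependence: repeat the calibration from many starting points and look at how the estimates co-vary. The two-parameter model here is a deliberately degenerate hypothetical stand-in, and scipy's curve_fit stands in for the calibration routine:

```python
# Hypothetical illustration of parameter interdependence: repeat the
# calibration from many random starting points and look at how the estimates
# co-vary. Strongly correlated estimates suggest the parameters are not
# independently identifiable from the data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(t, a1, a2):
    # stand-in with deliberately entangled parameters: only the product a1*a2
    # is well constrained by this output
    return a1 * a2 * t

t = np.linspace(0.0, 10.0, 50)
observed = model(t, 2.0, 3.0) + rng.normal(0, 0.1, t.size)

estimates = []
for _ in range(200):
    p0 = rng.uniform(0.5, 5.0, size=2)            # random starting point
    try:
        popt, _ = curve_fit(model, t, observed, p0=p0, maxfev=2000)
        estimates.append(popt)
    except RuntimeError:
        pass                                       # some starts may not converge

estimates = np.array(estimates)
print("correlation between a1 and a2 estimates:",
      np.corrcoef(estimates[:, 0], estimates[:, 1])[0, 1])
```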
Rogelio Oliva roliva tamu.edu
Junior Member
Posts: 2
Joined: Fri Mar 29, 2002 3:39 am

Experimental parameter variance

Post by Rogelio Oliva roliva tamu.edu »

Posted by Rogelio Oliva <roliva@tamu.edu>
Fadl,

The following article contains a detailed discussion of the role of parameter calibration in model testing:

Oliva R. 2003. Model calibration as a testing strategy for system dynamics models. European Journal of Operational Research 151(3): 552-568. http://iops.tamu.edu/faculty/roliva/res ... ation.html

Abstract: System dynamics models are becoming increasingly common in the analysis of policy and managerial issues. The usefulness of these models is predicated on their ability to link observable patterns of behavior to micro-level structure and decision-making processes. This paper posits that model calibration, the process of estimating the model parameters (structure) to obtain a match between observed and simulated structures and behaviors, is a stringent test of a hypothesis linking structure to behavior, and proposes a framework to use calibration as a form of model testing. It tackles the issue at three levels: theoretical, methodological, and technical. First, it explores the nature of model testing, and suggests that the modeling process be recast as an experimental approach to gain confidence in the hypothesis articulated in the model. At the methodological level, it proposes heuristics to guide the testing strategy, and to take advantage of the strengths of automated calibration algorithms. Finally, it presents a set of techniques to support the hypothesis testing process. The paper concludes with an example and a summary of the argument for the proposed approach.

I hope you find it useful,

Rogelio Oliva
---
Rogelio Oliva
Associate Professor | Ford Supply Chain Fellow
Mays Business School | 4217 TAMU | College Station, TX 77843-4217
Posted by Rogelio Oliva <roliva@tamu.edu>
posting date Sat, 15 Oct 2005 17:45:41 -0500