Sampling Theory - Energy Cycle

Jurgen Hemme
Junior Member
Posts: 4
Joined: Fri Mar 29, 2002 3:39 am

Sampling Theory - Energy Cycle

Post by Jurgen Hemme »

Hi,
I'm currently modelling organisational behaviour and I'm looking for the
connection between sampling time (delta T) and system stability.
What is the impact of increasing delta T on the system's stability? I'm
using the Routh stability test, but I'm afraid it will not work in my case
because my delta T can be extremely large.
Any suggestions on that?

Secondly, I'm looking for a representation of the human energy cycle while
working. Is there something like a guideline or table format for that?

Thirdly, is there a generalised learning curve that represents how much skill
an individual person has acquired? For example: after 5 years working in the
same job, one has acquired 95% of the total available skill level.

Fourthly, how can motivation be represented numerically?

Many thanks

Dipl. Ing. Jürgen Hemme
Research Fellow
Sheffield Hallam University
School of Engineering
Pond Street
Room 4L30

Sheffield
S1 1WB
United Kingdom

Phone: +44 (0)114 2253091
Fax: +44 (0)114 2253433
e-mail: j.hemme@shu.ac.uk
"Raymond T. Joseph"
Junior Member
Posts: 11
Joined: Fri Mar 29, 2002 3:39 am

Sampling Theory - Energy Cycle

Post by "Raymond T. Joseph" »

The sample rate is tied to the rate of change of the information. The
Nyquist sampling theorem says that you need to sample at a rate at least
twice the highest frequency component of the information you are analyzing.
If you have data with high-frequency content but the desired information
occupies only the lower portion of that band, then you have to filter the
data before you extract the information. In order to filter the data, it
must still be sampled at twice the highest frequency content of the signal,
noise included. If it is sampled at a lower rate, high-frequency components
will be aliased into the lower frequency band and you won't be able to tell
the noise from the information.
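
A quick Python sketch of what aliasing looks like (the frequencies are just
illustrative numbers): an 11 Hz sine sampled at only 10 Hz lands on exactly
the same sample values as a 1 Hz sine, so the two cannot be told apart.

import numpy as np

f_true = 11.0            # Hz, actual frequency of the signal
fs = 10.0                # Hz, sample rate (Nyquist would require at least 22 Hz)
n = np.arange(50)        # sample indices
t = n / fs               # sample times

undersampled = np.sin(2 * np.pi * f_true * t)
one_hertz    = np.sin(2 * np.pi * 1.0 * t)

# True: the undersampled 11 Hz signal is aliased to |11 - 10| = 1 Hz
print(np.allclose(undersampled, one_hertz))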

So you need to determine the maximum frequency of the signal before you
sample it. Sample at twice the highest frequency component. Then use a
digital filter to remove the high-frequency noise. Now you can consider
that you have all the information needed. If you use a sample time larger
than that required by the signal, you may have a stable system, but it will
be aliasing high-frequency components into the lower bands, which corrupts
the actual information.

Now let's say that you use the fully sampled data to calculate the system
model (or its parameters). The system itself will have time constants which
filter the data. Typically, the overall system will have a limited
frequency response. That is, above some frequency, the system no longer has
a (significant) response. You can now be assured that sampling at twice
this frequency will fully describe the system (assuming a linear system).
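
For a simple first-order lag, for instance, the required rate can be read off
the time constant; the 4-week figure below is just an assumed example:

import numpy as np

tau = 4.0                                # assumed dominant time constant, in weeks
f_corner = 1.0 / (2.0 * np.pi * tau)     # corner frequency of the first-order lag
fs_min = 2.0 * f_corner                  # bare Nyquist minimum sample rate
dt_max = 1.0 / fs_min                    # largest theoretical interval (= pi * tau)

# In practice you sample well above this minimum, e.g. several samples per tau.
print(f_corner, fs_min, dt_max)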

Now you have to look back at the data I/O of the system. The system acts as
a digital filter: if you feed it data with frequency components higher than
your sampling can account for, the system will alias that data. It may still
be stable, but the information is corrupt. So the data must be filtered
before the sampling process that feeds the system.

In real systems, there are typically time delays. The time delays also
affect the sampling requirements. They do so as a function of the other
system time constants. This depends on the topology of the system and thus
needs to be addressed case by case. And the Routh stability criterion only
applies to rational (polynomial) characteristic equations, not to systems
with pure time delays.
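
For the delay-free part, the basic Routh test is easy to mechanise. Here is a
bare-bones Python sketch (it skips the special cases of a zero first-column
entry or an all-zero row):

import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for a characteristic polynomial given
    by its coefficients in descending powers of s. The system is stable if
    all entries have the same sign (no special-case handling here)."""
    coeffs = np.asarray(coeffs, dtype=float)
    n = len(coeffs)
    width = (n + 1) // 2
    rows = np.zeros((n, width))
    rows[0, :len(coeffs[0::2])] = coeffs[0::2]
    rows[1, :len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n):
        for j in range(width - 1):
            rows[i, j] = (rows[i-1, 0] * rows[i-2, j+1]
                          - rows[i-2, 0] * rows[i-1, j+1]) / rows[i-1, 0]
    return rows[:, 0]

# s^3 + 2s^2 + 3s + 1: all first-column entries positive, hence stable
print(routh_first_column([1, 2, 3, 1]))

A pure delay adds an exp(-sT) term to the characteristic equation, which this
test cannot handle directly; a Padé approximation is one common workaround.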

The third question is interesting in that it suggests that, in order to
measure the percentage accomplished, someone must know everything that is
required to perform the task. This may be valid for menial tasks, but for
knowledge workers there is no end; there are always more ways to resolve
issues. This doesn't mean the problem is unsolvable, just that it may
require a different formulation. Rather than look for a percentage of
knowledge required, it might be more appropriate to look at the performance
increase with respect to acquired knowledge.
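
One simple formulation along those lines is a saturating (exponential) curve.
The sketch below just re-uses the 95%-in-5-years figure from the original
question to set the time constant; the ceiling itself is an assumption:

import numpy as np

# Hitting 95% of an assumed skill ceiling at 5 years implies a time constant
# tau = 5 / ln(20), roughly 1.67 years.
tau = 5.0 / np.log(20.0)

for year in range(11):
    skill_fraction = 1.0 - np.exp(-year / tau)
    print(f"year {year:2d}: {skill_fraction:6.1%} of attainable skill")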

The fourth issue may be looked at similarly. Motivation may be an
efficiency factor: given resources and a load, how efficient is the
transformation of resources into satisfaction of the load?
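
A minimal sketch of that view, with purely illustrative names and numbers:

# Motivation treated as an efficiency factor on the resources applied to a load.
def effective_output(resources_applied, motivation):
    """resources_applied in person-hours; motivation a dimensionless 0..1 factor."""
    return motivation * resources_applied

print(effective_output(40.0, 0.7))   # a 40-hour week delivering 28 effective hours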

Ray
From: "Raymond T. Joseph" <
rjoseph@wt.net>