One size rarely fits all.
When working with decision makers, who are typically very action- and
results-oriented, multimedia presentations divert focus and consume
time. When working with individuals in learning situations, engaging
more of the senses enhances retention.
-----------------------------------------------------------------
Bob Blyth
Department of Energy-Idaho Operations Office
850 Energy Drive, MS 1154
Idaho Falls, ID 83402
phone: 208-526-1181   fax: 208-526-7245   email: blythrl@inel.gov
-----------------------------------------------------------------
In defense of "bells & whistles"
From Phil Odence, HPS:
I will be the last person to argue for form over substance, and the first
to admit that one can easily go overboard with "bells & whistles."
However, from having been involved in building a number of Learning
Environments and observing people interacting with them, I am convinced
that a judicious use of multimedia technology can greatly enhance a
learning experience.
The point of some multimedia up front is to engage learners and to
put them in a mindset conducive to learning. It is easy for learners to
remain aloof from the "silly" game, or to approach it too clinically,
focusing on reverse engineering the thinking of the creators. Interactive
simulation is most effective when learners allow themselves to be drawn
into the fiction, and therefore to be vested in the fictitious outcomes.
You want heart rates to climb a bit when profits sag. Under such
circumstances the lessons of the simulation are driven deeper into the
viscera and stick better.
Some carefully crafted multimedia "realism" early in the experience
invites learners to join in the fiction, and thereby to learn better. I
have seen it; it works.
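To make the mechanics concrete: under the hood, a management flight
simulator is an interactive stock-and-flow model that the learner steers
decision by decision. The minimal Python sketch below is purely
illustrative; the model structure, parameter values, and names are
invented for this note, not HPS product code.

    # Toy stock-and-flow loop of the kind a management flight simulator
    # runs; the learner's decision each quarter is the price.
    # All parameters here are invented for illustration.

    def simulate_quarter(state, price):
        """Advance the toy model one quarter given the learner's price."""
        demand = max(0.0, 1000.0 - 8.0 * price)   # simple linear demand
        revenue = price * min(demand, state["capacity"])
        costs = 20.0 * state["capacity"]          # fixed cost per unit of capacity
        profit = revenue - costs
        state["cash"] += profit
        # Reinvest a share of positive profit in capacity: a reinforcing loop.
        state["capacity"] += max(0.0, 0.1 * profit) / 50.0
        return profit

    state = {"cash": 0.0, "capacity": 100.0}
    for quarter in range(8):
        price = 40.0   # a real simulator would read this from the learner
        profit = simulate_quarter(state, price)
        print(f"Q{quarter + 1}: profit={profit:8.1f}  cash={state['cash']:9.1f}")

The multimedia layer sits on top of a loop like this one; the argument
here is that a vivid surface makes the learner care about the numbers
the loop produces.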
***************************************************************************
Phil Odence
podence@hps-inc.com
High Performance Systems
45 Lyme Road, Suite 300
Hanover, NH 03755
voice- 603 643 9636 x107, fx- 603 643 9502, web- http://www.hps-inc.com
In defense of "bells & whistles"
I've read the discussion of the role of bells and whistles (e.g.,
multimedia) in management flight simulators and computer-based learning
environments with interest. There are good arguments on both sides as
to whether they aid or detract from the learning experience. And of
course there are many purposes for which flight simulators and
computer-based learning environments can be used; for each purpose the
utility of bells and whistles may differ. What is most striking,
however, is the utter lack of serious research to test these hypotheses.
At the moment we have people who don't use bells and whistles contending
that they are not helpful or even harmful, and those who do use bells
and whistles in their learning environments arguing for their proper
role. We have anecdotes from users about what appears to work for them
and what doesn't.
What we don't have is any research that provides a robust, replicable
assessment of what works in what circumstances, with what types of
learners, and for what purposes. We don't have any way to determine
whether the success or failure of a particular tool or approach is due
to its design, or to placebo, Hawthorne, guru, or demand effects, or a
thousand other confounding factors.
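To make concrete what even the simplest credible study would involve:
randomly assign learners to a multimedia version and a plain-interface
version of the same simulator, measure a pre-specified learning outcome,
and compare the groups. The Python sketch below uses invented data
purely to illustrate the shape of such an analysis; it is not a report
of actual results.

    # Sketch of a minimal randomized comparison; all data are invented.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    multimedia = rng.normal(72, 10, size=40)   # post-test scores, treatment group
    plain = rng.normal(68, 10, size=40)        # post-test scores, control group

    # Welch's t-test: does the multimedia group score differently?
    t, p = stats.ttest_ind(multimedia, plain, equal_var=False)
    print(f"mean difference = {multimedia.mean() - plain.mean():.1f} points, "
          f"p = {p:.3f}")

Of course, a single comparison like this controls none of the placebo,
Hawthorne, or demand effects noted above; ruling those out requires
careful controls, multiple sites, and replication.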
As someone who has done some research in this area, I know that such
assessment is extremely difficult. It is difficult logistically and
technically. It is a long-term, multifaceted activity. Most difficult
of all, it requires us to become explicit about what we mean by
learning, and about what outcomes, over what time frame, for what group
of people, we consider important.
Yet only by undertaking this research will we move beyond the unreliable
and anecdotal; only by undertaking such research will we be able to
build a cumulative body of reliable, useful knowledge, knowledge which
better enables us to learn about the awesome complexity of the systems
in which we live and whose obscure dynamics must be better understood if
we are to achieve our deepest aspirations, or even survive as a species.
There is a long history of educational innovations which yield early
successes but fail to replicate or transfer to a broad base. Without
serious assessment research, and the discipline it imposes on designers,
consultants, and users, I see no reason to believe that learning
laboratories, management flight simulators, learning environments,
organizational learning, or whatever one calls these tools and processes
will not suffer a similar fate. Relying on the market, workshop
ratings, developer intuition, or anecdotes to evaluate what works and
what doesn't will lead to superstitious learning. Anecdotes will always
be biased towards successes and towards attributing success to the
intervention or intervenor rather than to other factors, while learning
requires us also to examine the failures. One need only look
at the enduring demand for astrology, magic, quack remedies, UFO and
Elvis sightings, economic forecasters, crystal power and other nonsense
to see that the marketplace does not provide a strong test of useful,
reliable knowledge. (A pessimist, which I am not, can make a credible
case that the marketplace systematically rewards the quacks and
charlatans and punishes those seeking to apply scientific method to sort
the wheat from the chaff.)
Who out there knows of any research on the effectiveness of these and
other tools and techniques for teaching system dynamics or
organizational learning? Who is conducting such research? Who among
the users or developers is willing to participate in such research? If
not the system dynamics community, who? If not now, when?
John Sterman
Sloan School of Management
MIT, E53-351
30 Wadsworth Street
Cambridge, MA 02142
phone: 617/253-1951 fax: 617/258-7579 e-mail: jsterman@mit.edu