Two comments to Eliot Rich's submission, one short, another
long, but hopefully interesting to the SD community.
As an appetizer to the comments: The SD community belongs
to what Dr Howard F Lipson at CERT Coordination Center
calls a "benign community of researchers and educators".
Lipson's point is that the internet was designed for such
communities and therefore security was not a concern.
Accordingly, the internet is a system that never was
designed with security in mind, implying that whatever has
been done after computer scientists discovered that there
really are bad guys around is curing symptoms, rather than
solving the fundamental problem. (See H.F. Lipson Tracking
and Tracing Cyber-Attacks: Technical Challenges and Global
Policy Issues. Special Report CMU/SEI-2002-SR-009.)
Comment no. 1: The monograph Gonzalez, Jose J., ed. 2003.
From Modeling to Managing Security: A System Dynamics
Approach. Vol. 35, Research Series. Kristiansand, Norway:
Norwegian Academic Press
is best obtained directly from Norwegian Academic Press
(USD 30 post-paid). Just email
bestilling@hoyskoleforlaget.no or send a fax to +47 38 10
50 01.
Comment no. 2: In addition to the paper by Andersen et al.
that Eliot refers to, the following one (also presented at
the International System Dynamics Conference in Oxford)
Wiik, Johannes, Jose J Gonzalez, Howard F. Lipson, and
Timothy J. Shimeall. 2004. Modeling the Lifecycle of
Software-based Vulnerabilities. Proceedings of the 22nd
International Conference of the System Dynamics Society
July 20-24, at Oxford, UK.
discusses a simple system dynamics model that is able to
explain the basic behavior of single cyber-attacks based on
software vulnerabilities. The behavior itself is documented
in:
Arbaugh, William A, William L Fithen, and John McHugh.
2000. Windows of Vulnerability: A Case Study Analysis.
Computer 33 (12):52-59.
A "software vulnerability" is a known bug that can be
exploited by hackers, criminals or terrorists to attack
hosts in the internet. There are an estimated 5-15 such
bugs per 1000 lines of code in commercial software -
Windows XP has more than 50 million lines of code; other
systems are of comparable complexity and equally buggy. So
are thousands and thousands of undiscovered
vulnerabilities.
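A quick back-of-the-envelope calculation in Python, using the figures above, shows the scale. The 1% exploitable fraction is purely an illustrative assumption of mine, not a measured number:

```python
# Rough estimate of latent vulnerabilities, using the figures
# quoted above: 5-15 bugs per 1000 lines of code and roughly
# 50 million lines in a system like Windows XP.

lines_of_code = 50_000_000
bugs_per_kloc_low, bugs_per_kloc_high = 5, 15

bugs_low = lines_of_code // 1000 * bugs_per_kloc_low
bugs_high = lines_of_code // 1000 * bugs_per_kloc_high

# Assume (hypothetically) that 1% of bugs are exploitable.
exploitable_fraction = 0.01
vulns_low = int(bugs_low * exploitable_fraction)
vulns_high = int(bugs_high * exploitable_fraction)

print(f"Estimated bugs: {bugs_low:,} to {bugs_high:,}")
print(f"Exploitable (assumed 1%): {vulns_low:,} to {vulns_high:,}")
```

Even with a conservative exploitable fraction, that is indeed thousands and thousands of undiscovered vulnerabilities.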
When hackers discover one such vulnerability, they launch
attacks, the bug becomes known, and Microsoft programmers
(or the Linux community) develop a "patch" that users can
apply to eliminate the bug. Most users don't patch.
When "white hat hackers" (good guys) discover a bug and a
patch is developed before malicious hackers know about the
vulnerability, the hackers reverse engineer the patch to
learn about the vulnerability itself and then attack hosts
on the internet, knowing from experience that most systems
are not patched. It is documented that such attacks against
unprotected hosts can go on for 2 years and more. Probably
they could go on indefinitely, but hackers get bored and
look for new vulnerabilities.
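This long attack window can be illustrated with a toy stock-and-flow sketch in Python. This is not the Wiik et al. model; it is just a caricature with parameters I have assumed for illustration: a stock of unpatched hosts drains slowly after a patch is released, so attacks on the reverse-engineered vulnerability stay worthwhile for years:

```python
# Toy stock-and-flow sketch of the post-patch attack window.
# All parameters are illustrative assumptions.

hosts = 1_000_000                # hosts running the vulnerable software
patch_fraction_per_month = 0.05  # assumed: 5% of remaining hosts patch monthly

unpatched = hosts
months = 0
# Suppose attacks stay "worthwhile" while more than 10% of
# hosts remain exposed (an assumed threshold).
while unpatched > 0.1 * hosts:
    unpatched *= (1 - patch_fraction_per_month)
    months += 1

print(f"Attack window at these rates: about {months} months "
      f"({months / 12:.1f} years)")
```

At a 5% monthly patch rate the window lasts roughly 45 months, which is consistent with the documented observation of attacks continuing for two years and more.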
"Attack" does not mean manual attack: you can automate
anything with computers, such as finding hosts in the
internet, attacking them, installing malicious software,
deleting software, crashing computers, etc. Lipson argues
that software weapons are becoming increasingly
sophisticated while the average know-how required to
utilize such weapons is decreasing all the time. In fact,
even the discovery of vulnerabilities can be increasingly
automated (by incessantly testing the responses of systems
to different queries). Quoting Lipson again: "The expertise
of the average system administrator continues to decline."
Originally, system administrators were few and expert. Then
more and more networks were established and expertise
started to decline. The recent trend is a very rapid
increase of personal networks, owned by amateurs who don't
know much and don't care at all about security.
Imagine what would happen if a determined and wealthy enemy
assembled several hundred highly gifted computer scientists
somewhere as a pro-forma company, calling it The Really
Useful Computer Co. (or something equally innocuous), let
them discover 14*X vulnerabilities (X being a number of
bugs larger than Microsoft's programmers can deal with per
day) and then launched 14 waves of attacks on the internet
for 14 consecutive days. (By the way, such attacks can be
launched remotely and without traces, so that nobody knows
where they originate.) In fact, because a sufficient number
of hosts are not protected against known-vulnerability
attacks, one could skip the discovery of unknown bugs and
just concentrate on overwhelming the defenses by launching,
say, hundreds of known attacks simultaneously.
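The overwhelming dynamic is easy to sketch in Python. The rates below are illustrative assumptions only, but the point is structural: whenever the disclosure rate exceeds the patching capacity, the backlog of unpatched vulnerabilities grows without bound:

```python
# Minimal sketch of the "overwhelm the defenders" scenario:
# attackers disclose vulnerabilities faster than the vendor
# can patch them. Rates are assumed for illustration.

disclosed_per_day = 14   # assumed attacker disclosure rate
patch_capacity = 10      # assumed vendor patch capacity per day

backlog = 0
for day in range(1, 15):             # 14 consecutive days of attack waves
    backlog += disclosed_per_day     # new vulnerabilities disclosed
    backlog -= min(backlog, patch_capacity)  # vendor patches what it can

print(f"Unpatched backlog after 14 days: {backlog}")
```

With these numbers the backlog grows by 4 per day, leaving 56 unpatched vulnerabilities after two weeks; the exact figures matter less than the sign of the net flow.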
Is the threat realistic? Nobody knows for sure, because
nobody has done a serious study of such a scenario (at
least not in a way known to the public). It would be fun to
develop a system dynamics model - even if much of the
needed data is not well known, some estimates can be made
of the important parameters. Anybody interested in joining
forces?
Jose J. Gonzalez
Leader Security SIG
Email:
Jose.J.Gonzalez@hia.no
Home page:
http://ikt.hia.no/josejg/