Simple Expert System Applet

This may take some time to load. Works with Netscape 3.01 and higher.

Instructions ----- Run Expert System ----- Download Expert System ----- Read more about Expert Systems
Simple Expert

Hmmm... You might not have Java installed! Sorry.


There may be some problems with redrawing scrollbars after loading new knowledge bases.
Resize the window after loading to 'fix' this.
Instructions

This is a simple expert system. It works on knowledge bases that consist of propositions.

To make a consultation, pick a knowledge base from the menu at the top left of the screen. This will load the knowledge base into the higher of the two large text boxes. You can use the scrollbars to see the entire knowledge base.

Then press the Consult button. There will be a brief delay while the rest of the expert system is loaded, and a question will be asked at the bottom of the screen. It will also appear in the lower of the two text boxes. You answer the questions by pressing the True and False buttons. If you press the Why button, a listing of the chain of rules requiring an answer to this question will be displayed in the lower text box. You can scroll through this if you wish.

After a consultation you can copy and paste the results into a local text editor.


Return to Expert System
Using your own Knowledge bases

This program is implemented as a Java applet. This has a number of advantages, but one of the disadvantages is that applets are not permitted to load and save local files. If you want to provide your own knowledge base you must either type it into the first large text field (with scroll bars), or write it in a text editor or word processor and copy and paste it into this box.

The syntax for the knowledge bases is pretty simple. Only the first word on a line is significant. The remainder of the line is a proposition, which can take any form that you wish. The only important thing is that each use of this proposition must be identical. This means IDENTICAL. Little things to you are mountain ranges to the program. This means spelling, punctuation, number of spaces, and the case of letters (e.g. lower and UPPER).

A knowledge base consists of a title and a number of rules. Rules are separated by blank lines. Blank lines may not appear in rules.

The title is simply the first line of the knowledge base. It is not optional. It is followed by a blank line.

A rule consists of a Condition section and an Action section. The Condition section is one or more lines that must be satisfied for the Action section to activate. These list propositions whose truth may already be known, or can be determined by another rule. If neither of these is the case, the expert system will ask the user.

The Action section sets propositions to true or false, issues hypotheses, and concludes the consultation.

The recognized keywords (things that start a line) are:

The Condition section. The Condition section of a rule must consist of a number of propositions that must be true for the rule to 'fire' or execute. These are indicated in UPPERCASE in the following list, but may be entered in lowercase if you wish.

IF
IF X, where X is a proposition. Begins a rule.
AND
AND X. Used following a line beginning with IF to form complex conditions. Can be repeated for each subsequent condition.
IFNOT
IFNOT X. Condition is true if X is false. Begins a rule.
ANDNOT
ANDNOT X. Used following a line beginning with IF to form complex conditions. True if X is false. Can be repeated for each subsequent condition.
The Action section. The action section of a rule is composed of a number of propositions that are set to true or false, depending on the keyword. This section also includes keywords that terminate the consultation.
THEN
THEN X. X is asserted as true. THEN begins the Action section.
ANDTHEN
ANDTHEN X. X is asserted as true. ANDTHEN continues the Action section. There can be any number of ANDTHENs in the action section.
THENNOT
THENNOT X. X is asserted as false. THENNOT begins the Action section.
ANDTHENNOT
ANDTHENNOT X. X is asserted as false. ANDTHENNOT continues the Action section. There can be any number of ANDTHENNOTs in the action section.
THENHYP
THENHYP X. X is asserted as a true hypothesis, and the consultation will end if no other HYPs can be satisfied. THENHYP begins the Action section.
ANDTHENHYP
ANDTHENHYP X. X is asserted as a true hypothesis. The consultation will end if no other HYPs can be satisfied. ANDTHENHYP continues the Action section. There can be any number of ANDTHENHYPs in the action section.
CONCLUDE
CONCLUDE X. X is asserted as true, and the consultation ends. Conclude must end a rule.
Example:
Gilbertese Weather Forecasting

if the crab blocks the mouth of the hole
and the crab scratches the sand flat
then there will be wind and rain within two days
andthen rain
andthen wind

ifnot the crab blocks the mouth of the hole
and the crab scratches the sand flat
then there will be strong wind and no rain within two days
andthen wind
andthennot rain

ifnot the crab scratches the sand flat
then the crab leaves the sand in a pile

if the crab blocks the mouth of the hole
and the crab leaves the sand in a pile
then there will be rain and no wind within two days
andthen rain
andthennot wind

ifnot the crab blocks the mouth of the hole
and the crab leaves the sand in a pile
then there will be fine weather
andthennot rain
andthennot wind

if rain
and wind
thenhyp all voyages are off

ifnot rain
andnot wind
thenhyp short voyages ok

ifnot rain
and wind
thenhyp long voyages ok

ifnot wind
and rain
thenhyp short voyages ok, but let's stay home and watch t.v.
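The syntax above is small enough that a toy interpreter fits in a page. The following Python sketch is purely illustrative (the applet's own Java source is not reproduced here); it parses a cut-down two-rule version of the example knowledge base and backward-chains, asking a supplied callback for any proposition it cannot derive:

```python
# Toy interpreter for the rule syntax described above. Illustrative only;
# this is not the applet's actual implementation.

KB = """Gilbertese Weather Forecasting

if the crab blocks the mouth of the hole
and the crab scratches the sand flat
then there will be wind and rain within two days
andthen rain
andthen wind

if rain
and wind
thenhyp all voyages are off"""

# Map each keyword to (section, truth value asserted/required).
KEYWORDS = {
    "if": ("cond", True), "and": ("cond", True),
    "ifnot": ("cond", False), "andnot": ("cond", False),
    "then": ("act", True), "andthen": ("act", True),
    "thennot": ("act", False), "andthennot": ("act", False),
    "thenhyp": ("act", True), "andthenhyp": ("act", True),
    "conclude": ("act", True),
}

def parse_kb(text):
    """Split on blank lines: the first block is the title, the rest are rules.
    A rule is (conditions, actions); each entry is (proposition, truth)."""
    blocks = text.strip().split("\n\n")
    title, rules = blocks[0], []
    for block in blocks[1:]:
        conds, acts = [], []
        for line in block.splitlines():
            key, _, prop = line.partition(" ")
            section, truth = KEYWORDS[key.lower()]
            (conds if section == "cond" else acts).append((prop, truth))
        rules.append((conds, acts))
    return title, rules

def solve(prop, rules, known, ask):
    """Backward-chain: try every rule that asserts prop; if none fires,
    fall back to asking the user (here, a callback)."""
    if prop in known:
        return known[prop]
    for conds, acts in rules:
        if any(p == prop for p, _ in acts):
            if all(solve(c, rules, known, ask) == want for c, want in conds):
                for p, t in acts:          # rule fires: apply every action
                    known[p] = t
                return known[prop]
    known[prop] = ask(prop)                # no rule decides it: ask the user
    return known[prop]
```

Consulting this with a callback that answers True to both crab questions derives rain and wind, and then the hypothesis "all voyages are off", mirroring the first and sixth rules of the full example.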

Download Expert System

Download as .zip archives

Expert Systems and Anthropological Analysis

Michael Fischer ... adapted from 1994, Applications in Computing for Social Anthropologists, Routledge: London and New York

The idea of using an expert system, a computer program that simulates a human expert
(i.e. an informant), in anthropological analysis has been received by anthropologists with
some interest, but with more caution (Davis 1984:3). This caution is justified because to
most anthropologists the inner workings of the expert system are not known; they are
black boxes.  But anthropologists should be interested in a model that claims to represent
and use human knowledge, if only to evaluate that model.  This section describes some of
the basic assumptions in contemporary expert systems, discusses their usefulness to
anthropology, and concludes that many existing expert systems are of limited interest to
anthropologists, although the general model underlying expert systems can be used
productively.


2.1 Introduction

Artificial Intelligence (AI) is a multi-disciplinary area in which the goal is to represent
intelligence (usually human intelligence) in the modelling environment of a computer.
There has been research in Artificial Intelligence for as long as there have been computers.  It
was believed in the ‘fifties that ‘just a few more years’ would bring about a revolution in
AI, but those few years have receded annually.[20] In the past decade there have been
developments in AI that are considered by AI researchers (and others) to be partial
successes; among these is the expert system.  Expert systems are computer-based models
that simulate human expertise in a specific area (domain), such as a subset of medicine
(Shortliffe 1976), exploratory geology (Duda 1978), or education (Clancey 1981).  Expert
systems are claimed by AI researchers to be an important advance, and some claim
implications about models of human representation of knowledge and mechanisms of
inference (Barr 1982).


There is a small but growing literature on the use of expert systems in anthropology.
Besides Kippen (1988), described in §1.1, Brent (1988) has developed an expert system
to assist in statistical analysis. Furbee (1989) describes an expert system for ‘folk’
classification of soil in the Colca Valley in Peru. Read and Behrens (1992) describe a
simple expert system they developed in 1987 in which they modelled decision making about terms
of address used by Bisayan speakers in the Philippine Islands, adapted from Geoghegan
(1971). Fischer and Finkelstein (1991) wrote a production system which simulated
evaluating a potential marriage partner in an arranged marriage in the Panjab, Pakistan.
Benfer et al (1991) is a good anthropological introduction to expert systems.


2.2 Qualitative and Quantitative Analysis

Qualitative analysis can be defined as identifying qualitative structures, identifying the
states of those qualitative structures, and the pattern of changes (transformations) in those
states.[21]  Quantitative methods can sometimes be used to aid this process, but usually
qualitative methods are exclusively used for the analysis of qualitative data and structures
for which quantities proper are difficult to define.

Thom  (1975) argues that all quantitative analysis assumes a firm qualitative foundation.
Before they measure, people must agree that there is something to be measured, and that is
a qualitative judgement. Similarly, people must agree that the measure (metric) they use is
appropriate, and applicable to other phenomena.[22]

As an example, consider per capita income.  It is apparently easy enough to agree on the
structure, but the metric is another issue. If currency is used as a metric, a poor family in
the United States would be a wealthy one in Pakistan.  The metric can be further adjusted
by considering cost of living, but an acceptable level of living in the United States is not
equivalent to one in Pakistan.  The problem is not difficult to understand qualitatively;
there are different standards in these two places. The two countries’ per capita income can
be compared quantitatively, but the interpretation of the comparison is qualitative.  The
quantitative analysis is more difficult to reconcile, and indeed is undecidable without
reference to qualitative structures in the two societies.

In most cases quantitative analysis depends on continuity. To quantify a phenomenon
meaningfully it is usually necessary to assume that the relation between phenomenon and
metric can be described by a continuous function,[23] since a primary goal of
quantification is to provide a basis for comparison.  For phenomena where the analytic
focus is on states this is often misleading or impossible.  In most social phenomena there
is no continuous function that can adequately describe the important qualitative
relationships. As an example consider income and education. These are variables which
are often given a quantitative definition in social research.  They are relatively easy to
define, and people generally measure income in currency, education in years. But
linearity is often assumed, and usually there will be a good correlation between
them.  But it will not be a perfect correlation, as one unit change in the independent
variable will not result in some regular linear unit change in the dependent variable. Now
this is not terribly shocking, since people do not expect all the variation in one
variable to be explained by the other, but there is benefit in understanding the
relationship between the variables by breaking the relationship into stages, and examining
the conditions for moving from one stage to the next.  For instance, in the U.S.A. 11
years of education is minimally better than 10 years, but 12 years is far better than 11. This
is due to the local structure of American education; 11 years is pre-graduation, and 12
years is post-graduation. The graduating student has a qualitatively changed educational
status, the pre-graduating student has not significantly changed status.  This type of
analysis helps to give a better account of interactions.

Another reason quantitative analysis must depend on qualitative analysis is illustrated in
Figure 3. The graph shows hypothetical data g and two solutions fitted to that data.
Solution 1 is the better qualitative fit, as the relative shape appears to be the same as the
data, but is not as good a fit quantitatively as Solution 2. Solution 2 fits well quantitatively,
but probably describes a different underlying mechanism altogether.


[Image: simulation_5.gif]
Figure 3. Two models of g. (adapted from Thom 1975)


2.3 Expert Systems

An Expert System is designed to simulate one aspect of a human expert: the ability to
classify phenomena from a set of attributes.  The expert system is a classification engine.
It is a system that takes information about a particular case or instance within the domain
of the system and produces a qualitative result (or goal state).  It usually has incomplete
information, and makes qualitative judgements based on this information.  Expert systems
are defined in terms of algorithms in a computer program plus relationships established by
a human expert. This will be interesting to anthropologists if three conditions are met: the
computer should arrive at the same conclusions as a native expert; it should arrive at the
same conclusions as an anthropologist; and it should do useful jobs.

Expert systems, as a class of computer programs, are currently designed to reflect a
general model current within the artificial intelligence community; an expert system is not
simply a simulation of human expertise, but must be implemented (on a computer) in a
particular fashion; it is a product of an AI culture.  Ideally an expert system has two
primary components (see Figure. 4):

The Knowledge Base.  

The Knowledge Base is essentially a set of rules describing relations between elements in
the domain of knowledge. In the simplest form:

  [condition(s) → outcome(s)]

In spite of this notation causality is not assumed. The rules for deriving an outcome from
a set of conditions are always formulated externally by an expert, usually aided by a
knowledge engineer, i.e. a specialist in transforming the expert’s information into
statements suitable for a knowledge base.  The knowledge engineer stands to the expert as
anthropologists do to their informants. The knowledge that is selected for inclusion in the
knowledge base can have a variety of forms, depending on the form of the inference
engine.

Most expert system designers consider it important that the rules be easily inserted,
modified, or deleted from the knowledge base, in any order.  They usually consider the
rules to be weakly connected: there is no sequencing information about the order in which
they can apply, and the only connections between them are the use of common terms of
reference. Thus if one rule determines that a person’s residence is patrilocal, and another
rule can use that residence information to draw further conclusions, the rules are
connected.

The Inference Engine.

An Inference Engine is a method of using the rules in the knowledge base to derive a
conclusion. Using the simple knowledge representation above this might take the form:


if condition then add outcome to the context

where outcome is the conclusion if condition is true, and context is an area where
knowledge is recorded to determine if conditions are true.  An outcome is often part of
another condition that matches another rule. In other words, the inference engine takes the
rules provided by the knowledge base, and uses internal rules of inference to draw a
conclusion. The claim is that the internal rules are general to all inference. So the inference
engine is a set of rules which are applied to the rules in the knowledge base.
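The condition/outcome/context loop just described can be sketched in a few lines of Python. The sketch is generic and the rule content is invented for illustration, echoing the residence example above:

```python
# Naive forward chainer over [condition(s) -> outcome] rules: fire any rule
# whose conditions are all in the context, add its outcome to the context,
# and repeat until nothing new can be derived.

def forward_chain(rules, context):
    context = set(context)
    changed = True
    while changed:
        changed = False
        for conditions, outcome in rules:
            if outcome not in context and all(c in context for c in conditions):
                context.add(outcome)   # outcome recorded; may match later rules
                changed = True
    return context

# Invented rules: the two are connected only by a shared term of reference.
RULES = [
    ({"lives with husband's kin"}, "residence is patrilocal"),
    ({"residence is patrilocal", "descent is agnatic"},
     "household is patrilineally extended"),
]
```

Starting from the facts `{"lives with husband's kin", "descent is agnatic"}`, the first rule adds "residence is patrilocal", which in turn satisfies the second rule; as the text notes, the rules carry no sequencing information and are linked only through common terms.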

The inference mechanism is thus critical to the outcome; it is responsible for any
interrelation of elements beyond the rules in the knowledge base. It is usually based upon
some variant of logic, such as first-order logic, fuzzy logic (Zadeh 1975), modal logic
(Zeman 1973), or intuitionistic logic (Martin-Löf 1982), and also usually employs some
statistical mechanisms for measurement and classification.  

The inference engine is intended to be based on a general model for using knowledge and
should not have special knowledge about a particular domain. This model is claimed to be
unlike the usual computer program/model structure, because the specifics are separated
from the methods.  This distinction is made for at least two reasons:

a) It makes possible system expertise in different domains by modifying the knowledge
  base without modifying the inference engine.
b) AI researchers assume that in humans knowledge and inference are separate activities
  and that inference is prior to knowledge. Hence it is theoretically consistent to separate
  the two in the computer model.
  (Derived from Barr 1982)

[Image: simulation_6.gif]
Figure 4. Expert System Schematic

In most existing expert systems the knowledge base and inference engine are not terribly
complex in design.  The knowledge base determines the set of possible outcomes that the
system can consider and the  rules for arriving at those outcomes. Although this requires
great effort on the part of the human expert and the knowledge engineer the form of
representation is quite simple.

In many systems both outcomes and rules have an objective or subjective probability
associated with them, again derived from the human expert.  The knowledge base consists
of high-level structures derived through the formidable pattern matching and inference
skills of humans.


An inference engine has three parts: an identification mechanism, an evaluation
mechanism, and a goal mechanism.  The first two constitute the inference mechanism
proper, and the third is for finding efficient paths to an outcome; it does not strictly affect
the outcome (unless it is poorly designed), but it selects the best condition to request data
on rather than requesting all possible conditions in the knowledge base. So the goal
mechanism is a search pattern through the possible conditions that apply to a case, and it is
the goal mechanism that gives the expert system the appearance of performing like a
human expert by requesting a minimum of information.  The inference mechanism gives
the expert system the judgement to announce a result consistent with the knowledge base.
Most of the successful (externally validated) expert systems use some form of probabilistic
model (often Bayesian) as the basis of the inference mechanism, using the probabilities
associated with the knowledge base.  One common goal mechanism works by finding the
goal that is most likely to be true at the current time, and then finding the condition that will
give the most information about that goal (as defined by the evaluation mechanism).
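One way to make the "most informative condition" idea concrete is an expected-entropy calculation over the outcome probabilities. The sketch below is a generic illustration, not any particular system's goal mechanism, and all of the numbers and names in it are invented:

```python
import math

# Sketch of a goal mechanism: choose the yes/no question whose answer is
# expected to most reduce uncertainty over the goals. Probabilities invented.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def best_question(priors, likelihoods, asked=()):
    """priors[goal] = P(goal); likelihoods[q][goal] = P('yes' to q | goal).
    Returns the unasked question with the largest expected entropy reduction."""
    h0 = entropy(priors)
    best, best_gain = None, float("-inf")
    for q, p_yes_given in likelihoods.items():
        if q in asked:
            continue
        p_yes = sum(priors[g] * p_yes_given[g] for g in priors)
        expected_h = 0.0
        # Average the posterior entropy over both possible answers.
        for p_ans, lik in ((p_yes, lambda g: p_yes_given[g]),
                           (1 - p_yes, lambda g: 1 - p_yes_given[g])):
            if p_ans > 0:
                posterior = {g: priors[g] * lik(g) / p_ans for g in priors}
                expected_h += p_ans * entropy(posterior)
        gain = h0 - expected_h
        if gain > best_gain:
            best, best_gain = q, gain
    return best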


2.4 A simple example

Consider the factors that influence the marriages arranged by urban Punjabis of Lahore
(Fischer 1991a; Fischer 1991b).  Marriages are arranged in the Punjab by the parents and
other relatives of the potential groom or bride.  The following factors (not necessarily in
this order) appear to be the most important in the evaluation of a possible spouse:

  1 zat   (sometimes glossed as caste)
  2 jihez   (dowry)
  3 intellect
  4 education
  5 haq  mehr  (bride deposit)
  6 beauty
  7 izzat   (honour, respect, responsibility)
  8 baradarie   (clan)
  9 rishtidar   (relative)
  10 distance   (from natal home)

These are not Panjabi selection criteria, but an anthropologist’s measurement or probe of
the semantic domain of selection derived from what Panjabis say.  In addition, the
selection is influenced by the size of social networks and especially the availability of
females, who are supposed to be invisible before (and after) marriage except to relatives.

The relationships between these measurements are quite complex, and they are evaluated
relatively.  For instance, if the zat of two candidates is different, then what constitutes
enough izzat will be different in each case.  In other words the state of enough izzat varies,
depending on at least one other value. Amounts measured are not evaluable without other
context; there is a high degree of relativity.  Moreover it is probable that different people
have different selection models, and one person may have more than one.


To construct an expert system based on this situation:

1 What will the expert system do? Give a statement of the suitability of possible marriage
  partners.

2 How will the expert system do it? This is a fixed solution (relative to a particular
  inference mechanism), since an expert system uses the same inference method
regardless of the knowledge domain. Initially assume a simple mechanism; internal
rules derived from examples of previously considered marriages given a suitability
judgement by local experts.  These rules can be derived using a statistical mechanism
which weights the effect on each marriage of each of the factors.  In essence the rules
treat each factor as a dimension in a multidimensional space, and locate each qualitative
state (suitable/not suitable) within that space, given a value for each axis.  When the
expert system is consulted, the evaluation mechanism will test to see if the input factors
required by the inference engine are within a statistically significant distance from the
internal rule-derived values.  The goal mechanism will find the factor that makes the
biggest difference in continuing evaluation, and ask for that information.

  This is known as forward chaining because it works from factors to outcomes. Many
current expert systems turn the above goal mechanism on its head or side, called
backwards chaining and sideways chaining respectively.  Backwards chaining is
favoured for systems that have a large number of outcomes, much like the above
example if all the individuals in the marriage universe are included as part of the
knowledge base. In this type of system, the expert system would start attaching
probabilities to each person in the base, and finding information that would remove a
person from consideration.  This is called backwards chaining because it works from
solutions to factors, and appears more purposeful (Nilsson 1982). In this case a person
is the outcome rather than a simple yes or no.  Sideways chaining works a bit on both
principles, finding both weighted factors and weighted solutions.

3 What kind of data will it require? The data is dependent on the kind of inference
  mechanism used.  In this simple case the data will be of the form:

  marriage {value of factors 1-10}

  where the value will have already been weighted by the human operator: in terms of too
little, too much, enough, and, where appropriate, yes, no, same, and different. The
weighting in this example is assumed to always be from the son-giving side. This
would give us a knowledge base like that in Figure 5:


Factor         Marriage 1   Marriage 2   Marriage 3
zat            same         different    same
jihez          enough       too low      too high
intelligence   enough       too high     too low
relative       yes          no           yes
education      too low      enough       too high
haq            too low      too high     enough
beauty         enough       enough       too little
izzat          enough       too high     too low
bradarie       same         different    same
location       too far      ok           ok
suitability    yes          no           yes

Figure 5.  Example measurements for marriage model.

In consultation the expert system takes in the knowledge base, creates internal rules, and
answers the request, which would be for the suitability of a possible marriage (fig 3). To
derive an answer it asks the user to give values for some of the factors until it is possible to
determine the qualitative result, and then makes a pronouncement, yes or no. Note it cannot
ask the suitability question itself, as answering it is the purpose of the system, but suitability
is required in the knowledge base input to form the rules.
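The "statistically significant distance" test can be caricatured with a nearest-example match over the Figure 5 data. This one-nearest-neighbour sketch is far cruder than the weighted statistical model the text assumes, and uses only a subset of the factors:

```python
# Crude stand-in for the matching described above: score a proposed marriage
# against each worked example from Figure 5 (subset of factors shown) and
# return the suitability of the closest match. Illustration only.

EXAMPLES = [  # (factor values, suitability) for marriages 1-3 in Figure 5
    ({"zat": "same", "jihez": "enough", "relative": "yes",
      "izzat": "enough", "location": "too far"}, "yes"),
    ({"zat": "different", "jihez": "too low", "relative": "no",
      "izzat": "too high", "location": "ok"}, "no"),
    ({"zat": "same", "jihez": "too high", "relative": "yes",
      "izzat": "too low", "location": "ok"}, "yes"),
]

def suitability(case, examples=EXAMPLES):
    """Count matching factor values against each example; closest wins."""
    def matches(factors):
        return sum(case.get(f) == v for f, v in factors.items())
    factors, verdict = max(examples, key=lambda ex: matches(ex[0]))
    return verdict
```

A real system would weight the factors and refuse to pronounce when the nearest example is not significantly closer than the alternatives; this sketch always answers, which is exactly the kind of hidden behaviour §2.6 warns about.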


2.5 A knowledge-based example

The example in §2.4 is fairly easy to follow, but has little depth because of the immense
amount of analysis that is needed to set it up; for it to work it must be told to seek the
correct information (the selected criteria), and that is known only after analysis.  It also
fails to take into account any higher-level ethnographic or ethnological knowledge, it is a
purely descriptive model with no explanatory power. Additionally, the particular method
described is heavily committed to a particular model in the formulation of rules, and
assumes that the results are linearly differentiable; that is, that each state has a unique
coordinate range in the multi-dimensional space.

Most expert systems incorporate higher-level knowledge, in the form of explicit rules in
the knowledge base.  The previous example can be greatly improved in performance by
adding rules of the following type to the knowledge base:

1) if zat is same then izzat is enough.
2) if bradarie is same then izzat is enough.
3) if relative is yes then zat is same.
4) if relative is yes then bradarie is same.
5) if distance is too far and relative is yes then distance is ok.

and so on.  These kinds of rules add information about factors that cannot be taken into
account in a regular, statistical method.  One might ask why the entire system could not be
based on rules like these, freeing the system from having to derive rules from
empirical data altogether.  The answer is that one can, and most working systems do.


However, although the rules appear to be ‘higher-level’, they are no less empirical with
respect to the expert system, and provide no explanation for the outcome that is not in the
rules to begin with.  This defect is usually overcome in expert systems by the expert and
the knowledge engineer adding comments to each rule, so that when a user inquires about the
reason a particular conclusion has been reached, the comments are displayed for each rule in a
successful derivation of the conclusion.

We need not limit ourselves to such simple kinds of factors. For example, consider some
rules adapted and simplified from Fischer and Finkelstein (1991):

if girl is immoral then marriage is not a good risk
if mother is immoral then daughters are probably immoral
if girl is immoral then younger sisters may be immoral
if girl plays suggestive music then girl is immoral

believed: ‘girl played suggestive music’
conclusion: ‘marriage may not be a good risk’
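The chain from belief to conclusion can be made mechanical. This sketch (illustrative only, with the rules simplified to condition lists) backward-chains from the goal and records each rule that fires, much as a derivation trace would:

```python
# Illustrative trace of the derivation above: backward-chain from the goal
# and record each rule that fires along the way.

RISK_RULES = [
    (["girl plays suggestive music"], "girl is immoral"),
    (["girl is immoral"], "marriage is not a good risk"),
]

def derive(goal, rules, believed, chain):
    """True if goal is believed or derivable; fired rules appended to chain."""
    if goal in believed:
        return True
    for conditions, outcome in rules:
        if outcome == goal and all(derive(c, rules, believed, chain)
                                   for c in conditions):
            chain.append((conditions, outcome))   # record the fired rule
            return True
    return False
```

With believed = {"girl plays suggestive music"}, the trace fires the music rule and then the risk rule, reproducing the conclusion in the text.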

Even in this simplified model it is clear that much of the complexity of the computing
component is in the goal mechanism, which ideally has no analytic effect on the final
outcome, whereas the inference mechanism is relatively simple, using models that are
more or less in common usage in descriptive analysis.  In spite of this many expert
systems do often succeed in making judgements consistent with the human experts they
are based on (Michie 1982).  They achieve this by representing knowledge as a set of local
models, made up of one or more rules, that are only weakly (and informally) interrelated,
rather than by having a single large formal model of the expert’s knowledge.

Of course the degree of interrelation varies from system to system. For example, in most
learning systems the initial set of structures the system is told to learn about has been carefully
selected to be statistically independent of the others.  In systems where the rules are entered
directly, the rules will have been carefully selected.  Most successful systems have undergone an
enormous amount of tuning and pruning to achieve their results, using rules similar to the
latter example.  But the point remains that the knowledge base consists of a large number
of conditions and outcomes which are not generally arranged in a deterministic structure by
the human expert; rather they represent bits of information that are connected by the
sense of relevance that the human expert gives them.  It is the inference engine's role to
reconstruct this relevance.  Both the former and latter style of knowledge base share the
same assumption: that each outcome has some non-intersecting set of derivations
with respect to other outcomes.

Most current expert systems also have a probabilistic component.  The knowledge base is
for the most part entered in the form of ‘higher-level’ rules, but objective and subjective
probabilities are attached to the conditions and outcomes by the human expert. This is one
way to allow the derivation of the outcome to be partial; the outcome need not be
absolutely defined with respect to the knowledge base, only defined to some arbitrary
degree of probability.  This greatly amplifies the capacity of the expert system to classify,
since it is not restricted to finding exact matches to what has been encountered before, but
rather compares cases as prototypes, simulating the capacity of human experts to make
judgements on new cases.

There are several ways to account for the success of current expert systems.  First, since
the local models as presented to the expert system are only descriptive models, and the
overall system is a performance model, no internal explanation need be generated; the
expert system is judged only on its descriptive performance. Second, modern statistical
methods are quite powerful descriptively, so one could expect them to be reliable
descriptors when used.  Third, the knowledge base is created, selected and pruned by
humans and consists of human expert judgements. This is also true of information
supplied to the expert system while it is operating. So it is assumed that the human can
answer the questions asked by the expert system appropriately and correctly.  So in many
ways the success of contemporary expert systems is a sleight of hand; all the human
interaction in the process is taken for granted. But it is fair to say that all the expert system
designers claim to do is represent the knowledge of a human expert, not to create a
human-like expert.

From an anthropologist’s point of view the rule-based model is preferable to the statistical
one, but the choice makes no difference to the goal of the expert system, which is simply to
descriptively mimic an expert.  No current system can do more; expert system writers
might claim psychological reality (many do not), but that is a far cry from establishing
psychological reality, as the debates (see Buchler and Selby 1968; Burling 1969) over
the new anthropology of the ‘sixties demonstrate.

Anthropologists may still find possible significance for anthropology in the general model
underlying the expert system.  A model of some major segment of human action need not
be a single large formal model, but a series of weakly interacting local models. If these can
be stated consistently, anthropologists can explore at least descriptively how the models
interact with each other.


2.6 Conclusion

The goal of an expert system is to make qualitative judgements, to predict the state of a
system relative to contextual data. However it may not be clear how an expert system can
help in qualitative analysis. After all, if you have to provide the model, what is the expert
system doing for you? This is not a fair argument, as it applies to any computer-based aid.
It does nothing that you could not do given pencil and paper, in ten or twenty years.  The
computer in this role amplifies what can be done.

There are two more serious objections that can be raised.  One is the hidden model
objection, which rests on what happens in the black box of the inference engine to the
model or data that was entered. This is a problem only if there is no control over the
identification and evaluation mechanisms in the system.  In general the other mechanisms
are not terribly important; for example it is not important from an analytic point of view
whether the goal mechanism is a forward or backward chaining strategy.  That is a
description of how the information is ordered and accessed internally, rather than how it is
evaluated.  However, it is critical to control, or at least understand, the internal evaluation
method, for the analyst is locked into the limited range of possible models that a given
system can accommodate.  This is strictly an issue of access to programming skill (Read
and Behrens 1992:250).

The second objection is to the formal or theoretical basis of the general model of an expert
system.  As outlined above, all current expert systems work more or less upon one general
macro-method: given a list of symptoms and a list of outcomes, the system evaluates the
most likely state(s) (outcome) for the system to take at each point of the analysis.  The
generalised expert system model attempts to achieve this global scope without explicitly
laying down all the paths, rather piecing together a unique solution for each unique
situation, using only a series of small, local models and a general inference mechanism as
the basis.  It does this not by incorporating a single exhaustive model relating all possible
states to each other, but by using individual instances of information and relating them
according to an internal model of weak interaction.  There is formal support for the weak
interaction model in mathematics from Thom (1975), and in anthropology and simulation
from Zackary (1980).

The problem with using expert systems in anthropological analysis is created by the split
between knowledge base and inference engine;  in general the non-programmer
anthropologist can only control  the knowledge base.  Regardless of the type of models
that the anthropologist sets up in the knowledge base, the inference model must be known
to evaluate the interaction of the models as anticipated. This makes the system suspect for
analysis unless one knows the inference model in detail, and is satisfied that it realistically
represents the assumptions that must be made.  This objection is not to the general
approach, but to the fixation on a particular global model, the evaluation mechanism.  This
problem is not unique to expert systems, but arises in any use of simulation to test models:
the result of a model must always be tested against another model before it can be
interpreted. The properties of the evaluation model must be known and consistent with its
purpose.  If the problem of control can be overcome then the general expert system model
has potential as a means of exploring the interactions of a large number of local models
towards a set of global responses; a method of qualitative simulation.


