“Expert Systems” (ES) are complex artificial intelligence (AI) programs. Strong AI addresses the emulation of human cognition, such as natural language processing (NLP). Weak AI, on the other hand, deals with “knowledge engineering” and is concerned with engineering systems to perform so-called “intelligent” human tasks, such as medical diagnosis, optimal portfolio selection and battle-plan management. The most widely used way of representing domain knowledge in an ES is as a set of production rules, often coupled with a frame system that defines the objects that occur in the rules. The frame (expert-system shell) format depends on the problem domain (its nature and size) and also on the nature of the problem-solving task. Undoubtedly the most difficult task in designing a knowledge-based system is the “knowledge elicitation bottleneck”, which arises mainly from the paradox of expertise: it is often said that experts do not know what they know. (The well-known Greek philosopher Socrates called himself wise because he knew that he knew nothing!)
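A minimal sketch of the rules-plus-frames representation described above (all frame slots and rule contents here are hypothetical illustrations, not taken from any actual medical ES): each frame defines an object with slots, and each production rule tests slot values to assert a conclusion.

```python
# Hypothetical sketch: production rules coupled with a frame system.
# A frame defines an object (here, a patient) via named slots.
frames = {
    "patient": {"fever": True, "rash": True, "age": 34},
}

# Production rules: (name, condition over the frames, conclusion).
rules = [
    ("r1", lambda f: f["patient"]["fever"] and f["patient"]["rash"],
     "consider measles"),
    ("r2", lambda f: f["patient"]["fever"] and not f["patient"]["rash"],
     "consider influenza"),
]

def fire(frames, rules):
    """Return the conclusions of all rules whose conditions hold."""
    return [concl for name, cond, concl in rules if cond(frames)]

print(fire(frames, rules))  # ['consider measles']
```

A real expert-system shell would add slot inheritance between frames and conflict resolution among competing rules; this sketch only shows how rules reference frame slots.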
After eliciting the domain knowledge, successfully reformulating it for the system is of utmost importance. One way of doing this is ‘scenario analysis’ (a structured interview): a face-to-face interview with the expert, recorded on video or audio tape for subsequent transcription and analysis. Another common method is ‘protocol analysis’: observing the expert actually doing the job, with protocols later prepared from the records. Unfortunately, the information obtained from the protocols may not be sufficient to yield sensible rules.
The study of the organization of knowledge is known as epistemology. After acquiring the knowledge, an ‘epistemological analysis’ is conducted to establish the structural properties of the expertise.
Ultimately, ‘logical analysis’ is performed to map the knowledge onto a formal structure. At this stage of the design process, the expert’s knowledge is reformulated for the system. After the knowledge engineer codes the knowledge explicitly into the knowledge base, the subject expert (the clinician) may then critically evaluate the system, repeatedly, until a satisfactory level of performance is reached.
For medical diagnosis, there is scope for ambiguity in the inputs: the history (the patient’s description of the diseased condition), the physical examination (especially with uncooperative or less intelligent patients) and laboratory tests (faulty methods or equipment). Moreover, for treatment, there are the possibilities of drug reactions and specific allergies, and of patients’ non-compliance with the therapy because of cost, time or adverse reactions. With new modalities of treatment appearing almost daily, deciding on the particular treatment regime to be adopted for each individual patient becomes a complex process. More often than not, a large amount of information has to be processed, much of which is quantifiable. Intuitive thought processes involve rapid, unconscious data processing, combine the available information by a law of averages, and therefore have low intra- and inter-person consistency. The clinician of today should therefore move towards analytic decision making which, albeit typically slow, is conscious, consistent and clearly spells out the basis of each decision.
The other branch of AI is artificial neural networks (ANNs). Because of their interconnections, the ANN approach is often called connectionist. Connectionist ES are ANN-based ES in which the ANN generates the inferencing rules, e.g. the fuzzy-MLP, where linguistic and natural forms of input are used. Apart from that, ‘rough set theory’ may be used to encode knowledge in the weights more effectively, and genetic algorithms (GAs) may be used to better optimize the search for solutions. All these methods fall under the purview of “soft computing”.
For instance, a trained MLP may be used for rule generation in ‘If-Then’ form. These rules describe the extent to which a pattern belongs, or does not belong, to one of the classes in terms of antecedent and consequent clauses. To do this, one backtracks along maximal weighted paths in the trained net, utilizing its input and output activations. After training, the connection weights of an MLP encode among themselves (in a distributed fashion) all the information learned about the input-output mapping. Therefore, any link weight of large magnitude reflects a strong correlation between the neurons it connects. This property is utilized in evaluating the importance of an input node (feature) to an output-layer node (decision). One computes the path weights from each output node to each input node through the various hidden nodes; the maximum output-input path (through any hidden node) denotes the importance of that input feature in arriving at the corresponding output decision. One or more hidden layers may be considered (if necessary) when evaluating the path weights; an increase in the number of hidden layers simply increases the variety of possible paths through the hidden nodes in the different layers. A heuristic thus allows selection of those currently active input neurons contributing the most to the final conclusion (among those lying along the maximum weighted paths to the output node) as the clauses of the antecedent part of a rule. Hence, the currently active test-pattern inputs (the current evidence) interact with the generated “knowledge base” (the connection weights learned during training) to produce a rule justifying the current inference. The complete If part of the rule is found by ANDing the clauses corresponding to each of the selected features.
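The maximal-weighted-path heuristic above can be sketched as follows for a single-hidden-layer MLP. This is an illustrative reconstruction, not the original authors' code: the network sizes, random weights, `top_k` parameter and `feature_i` clause names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 3, 2
W1 = rng.normal(size=(n_hid, n_in))   # input -> hidden weights (assumed trained)
W2 = rng.normal(size=(n_out, n_hid))  # hidden -> output weights (assumed trained)

def path_weights(W1, W2):
    """path_weights[o, i] = max over hidden h of |W2[o, h]| * |W1[h, i]|,
    i.e. the heaviest path from input i to output o through any hidden node."""
    return np.max(np.abs(W2)[:, :, None] * np.abs(W1)[None, :, :], axis=1)

def antecedent(x, out_node, W1, W2, top_k=2):
    """Select the top_k currently active inputs lying on the heaviest paths
    to out_node, and AND them into the If part of a rule."""
    pw = path_weights(W1, W2)[out_node] * np.abs(x)  # weight paths by activation
    chosen = np.argsort(pw)[::-1][:top_k]
    return " AND ".join(f"feature_{i} is active" for i in sorted(chosen))

x = np.array([1.0, 0.0, 0.8, 0.5])  # current test pattern (current evidence)
print(f"IF {antecedent(x, 0, W1, W2)} THEN class_0")
```

Multiplying the path weight by the input activation is what lets the current evidence, and not just the stored weights, shape the generated rule.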
Because of the inherent fuzziness (qualitative nature) of most biomedical data, connectionist expert systems are likely to fare better than their traditional counterparts as far as reaching a diagnosis is concerned.
Spatiotemporal contextual information can be incorporated into automated EEG analysis through syntactic analysis techniques. Here, the EEG is represented as a series of elementary patterns called “tokens”. After subdividing an EEG tracing into 1-second intervals, each interval is assigned a unique character, or “label”. The resulting sequence of labels is known as a “sentence”, which is “parsed” by a definite “grammar”. A grammar consists of a set of rules that governs the merging of tokens into higher-order recognizable entities in the input data.
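The token-sentence-grammar pipeline above can be illustrated with a toy rewriting grammar. The token alphabet and rules here are invented for the example (they are not a clinically validated EEG grammar): each 1-second interval carries one label, and the grammar repeatedly merges token runs into higher-order entities.

```python
# Hypothetical token alphabet: S = spike, W = slow wave, N = normal activity.
# Each character labels one 1-second interval; the string is the "sentence".
sentence = "NNSWSWSWNN"

# Toy grammar: rewrite rules applied until no rule matches.
grammar = [
    ("SW", "C"),  # spike followed by slow wave -> spike-and-wave complex
    ("CC", "C"),  # adjacent complexes merge into one entity
]

def parse(sentence, grammar):
    """Repeatedly apply the grammar's merge rules until a fixed point."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in grammar:
            if lhs in sentence:
                sentence = sentence.replace(lhs, rhs)
                changed = True
    return sentence

print(parse(sentence, grammar))  # 'NNCNN'
```

The parsed sentence exposes the higher-order structure (a single spike-and-wave run flanked by normal activity) that the raw label sequence obscures.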
A severe drawback of this approach lies in the amount of heuristics involved in writing the grammar. It is also almost impossible for an untrained electroencephalographer (EEGer) to inspect the grammar and make the necessary modifications. Knowledge-based approaches provide a way of increasing the involvement of the expert (the EEGer) in the design process and also allow contextual information to be utilized to a larger degree. These systems apply a body of knowledge (the “knowledge base”) to the input data and to subsequently derived facts (the “data base”). Typically, the knowledge base comprises rules of the form IF<premise>THEN<action>, which presumably reflect the human expertise. Another advantage of this flexible problem-solving approach is that the collection of rules may be modified from time to time. Knowledge-based enhancement of EEG signals and parallel processing are being employed to save time and effort, as well as to increase the accuracy of the interpretations.
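The IF<premise>THEN<action> scheme above amounts to forward chaining: rules fire against the data base of input data and derived facts until nothing new can be added. A minimal sketch follows; the specific facts and rule contents are hypothetical placeholders, not real EEG interpretation rules.

```python
# Hypothetical knowledge base: (premise set, action) pairs.
rules = [
    ({"spike", "slow_wave"}, "spike_and_wave"),
    ({"spike_and_wave", "generalized"}, "possible_absence_pattern"),
]

def forward_chain(facts, rules):
    """Apply IF<premise>THEN<action> rules to the data base (facts)
    until no rule derives anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, action in rules:
            if premise <= facts and action not in facts:
                facts.add(action)  # derived fact joins the data base
                changed = True
    return facts

db = forward_chain({"spike", "slow_wave", "generalized"}, rules)
print(sorted(db))
```

Because each rule is an independent entry in the list, the EEGer can add, remove or amend rules without touching the inference procedure, which is precisely the flexibility the text attributes to this approach.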