The Quest for a Thinking Assistant, Part II

The tools I discussed in my earlier post on this topic attempt to deal only with the storage of information. Organization of knowledge, the work of integrating various facts or propositions, relies entirely upon the human. The tools are designed to improve the efficiency with which information can be organized, while allowing for changes in the organizational structure. As previously noted, I have not yet found a viable solution which handles that challenge well.

Another of my long-term fascinations has been the area conventionally called “Artificial Intelligence”. For years, this interest was centered on the possibility – the one which fueled the majority of AI research – that a machine could actually attain “intelligence”, and even consciousness. After a great many years of attempting to integrate this possibility with the tenets of Objectivism – and this only after becoming a serious student of Objectivism – it became clear to me that such a machine can never exist. Specifically, it is impossible for a machine to attain a rational consciousness, as this requires a reasoning faculty, aware of its surroundings, operating with free will, performing goal-directed actions. Free will is impossible to devise in a machine; therefore values cannot be held by a machine (that connection requires additional comment, but not tonight); therefore goals cannot be developed, and consciousness cannot occur.

I’ll leave that direction of thought for a later discussion. My purpose tonight is to discuss my search for a thinking assistant using AI concepts. What can be achieved by a machine is a repository of information combined with a logic processor. AI research has developed two approaches to combining these elements with enough sophistication to be considered possible routes to a useful tool for storing and maintaining structured information. In this post and an ensuing one, I’ll discuss each in turn.

The first approach of interest is the development of declarative programming languages. These languages, of which Prolog is the flagship, rely upon an interpreter containing a generalized logic processor. This processor can determine the truth or falsity of statements of symbolic logic [the Objectivist reader cringes], once the truth-state of the parameters used by a symbolic logic statement is determined. The language harnesses this logic processor through a grammar which allows a programmer to state a set of “facts”, and then construct a complex logical statement using these facts, which the logic processor will attempt to “prove”. In the process of solving the logical problem posed, the system achieves the programmer’s desired goal through various “side effects” generated as it traverses the logical statement and facts in the quest for all possible “solutions” to the problem.
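To make this concrete, here is a minimal sketch of the style, using an invented family-relations example of my own rather than anything from a particular text:

    % Facts: a small database of parent/2 relationships.
    parent(tom, bob).
    parent(tom, liz).
    parent(bob, ann).

    % Rule: X is a grandparent of Z if X is the parent of
    % some Y who is in turn the parent of Z.
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

Posing the query ?- grandparent(tom, Who). sets the logic processor searching for every binding of Who that makes the statement provable. It answers Who = ann, and through backtracking it would enumerate any further solutions the facts allowed.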

If this sounds like a simply bizarre way to create a software program, let me assure you: as a standard procedural programmer, I found this approach almost laughably alien. As a general-purpose programming language, its power lies in the ability to create extraordinarily complex computational flows indirectly, through clever logical statements and a firm understanding of the algorithm underlying the logic processor. I reached the point of “seeing” this power, but I certainly have not mastered the art of harnessing it effectively.

That being said, the underlying combination of a database of “facts” with a logic processor remains intriguing. A common “textbook” use of Prolog is to create an expert system. An expert system is a set of encoded facts which can be asked an encoded question (or series of questions) to receive an encoded answer. In the most trivial of expert systems, the question posed will have a True or False answer, and the system will respond with “True” or “False”. Slightly more advanced is a system structured as a diagnostic tool, which asks the user a series of questions, moving from the most general to the increasingly specific, with each question selected by the system based upon the user’s previous answers. Most of us have undoubtedly dealt with automated help systems that are poor implementations of this form of expert system.
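A toy version of such a diagnostic tool can be sketched in Prolog. The symptoms, the rules, and the known/2 bookkeeping predicate below are my own invention, purely for illustration:

    :- dynamic known/2.

    % Each rule encodes: "if these symptoms hold, conclude this diagnosis."
    diagnosis(flu)  :- symptom(fever), symptom(aches).
    diagnosis(cold) :- symptom(sneezing), \+ symptom(fever).

    % Ask the user about a symptom only once, remembering the answer.
    % The user replies with the term "yes." or "no." at the prompt.
    symptom(S) :- known(S, yes), !.
    symptom(S) :- \+ known(S, _),
                  format("Do you have ~w? (yes/no) ", [S]),
                  read(Reply), assertz(known(S, Reply)),
                  Reply == yes.

Querying ?- diagnosis(D). makes the system ask only those questions demanded by the rule it is currently trying to prove, which is precisely the general-to-specific questioning pattern described above, arrived at as a side effect of the proof search.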

Can one use such a declarative language system to encode a body of knowledge, and then examine its internal consistency? Or ask it a question and receive an answer? The trick here is in the encoding, as is the case in any attempt to reduce knowledge to a symbolic logical form. Each “term” in the system needs to be exhaustively defined, along with all of its inter-relationships with other terms. A key problem is setting boundaries on the universe of terms introduced to the system. There will always be a practical limit to the number of terms and relationships the system can manage, and the problem of precisely defining the terms themselves, and all of the actual connections between terms, rapidly becomes overwhelming. It is quite likely that the task of creating a sufficient database of encoded facts is simply impossible to accomplish.
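To give a feel for the encoding burden, consider even this trivial fragment of a concept hierarchy, a toy encoding of my own and nothing like a serious ontology:

    % A toy fragment of a concept hierarchy.
    is_a(table, furniture).
    is_a(furniture, artifact).
    is_a(artifact, entity).

    % Transitive subsumption: X is a kind of Z if there is
    % an is_a chain leading from X to Z.
    kind_of(X, Y) :- is_a(X, Y).
    kind_of(X, Z) :- is_a(X, Y), kind_of(Y, Z).

    % A crude consistency check: no concept may subsume itself.
    inconsistent :- kind_of(X, X).

Even this fragment says nothing about what a table actually is. Every attribute, every measurement, every causal connection would require predicates of its own, and the count grows combinatorially as terms are added.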

I recall seeing one attempt to examine the possibilities of using such a system in this manner. In fact, the author of the software used Objectivism-based concepts as the terms in several of his examples. I have no idea if this system is still available, but if I find it on the Internet, I’ll be sure to post a link for interested parties. My recollection is that I could not understand the encoding scheme he was attempting to use at all, and made little or no headway in creating my own examples.
