Category Archives: Artificial Intelligence

The Quest for a Thinking Assistant, Part II

The tools I discussed in my earlier post on this topic attempt to deal only with the storage of information. The organization of knowledge, that is, the attempt to integrate various facts or propositions, still relies entirely upon the human. The tools are designed to improve the efficiency with which information can be organized, while allowing for changes in the organizational structure. As previously noted, I have not yet found a viable solution that handles that challenge well.

Another of my long-term fascinations has been in the area of what is conventionally called “Artificial Intelligence”. For years, this interest was centered on the possibility (the one which fueled the majority of AI research) that a machine could actually attain “intelligence”, and even consciousness. After a great many years of attempting to integrate this possibility with the tenets of Objectivism (and this only after becoming a serious student of Objectivism), it became clear to me that such a machine can never exist. Specifically, it is impossible for a machine to attain a rational consciousness, as this requires a reasoning faculty, aware of its surroundings, operating with free will, performing goal-directed actions. Free will is impossible to devise in a machine; therefore values cannot be held by a machine (that connection requires additional comment, but not tonight), therefore goals cannot be developed, and consciousness cannot occur.

I’ll leave that direction of thought for a later discussion. My purpose tonight is to discuss my search for a thinking assistant using AI concepts. What can be achieved by a machine is a repository of information combined with a logic processor. AI research has developed two approaches to combining these elements to a level of sophistication sufficient to be considered possible routes to a useful tool for the storage and maintenance of structured information. In this post and an ensuing one, I’ll discuss each in turn.

The first approach of interest is the development of declarative programming languages. These languages, of which Prolog is the flagship, rely upon an interpreter containing a generalized logic processor. This processor can determine the truth or falsity of statements of symbolic logic [the Objectivist reader cringes], once the truth-states of the parameters used in such a statement are determined. The language harnesses this logic processor through a grammar which allows a programmer to state a set of “facts”, and then construct a complex logical statement using those facts, which the logic processor will attempt to “prove”. In the process of solving the logical problem posed, the system achieves the programmer’s desired goal through various “side effects” generated as it traverses the logical statement and facts in the quest for all possible “solutions” to the problem.
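
To give a flavor of the idea, here is a minimal sketch, written in Python rather than Prolog, of a fact database plus a rudimentary logic processor. The facts, rule, and query are entirely made up for illustration, and a real Prolog system does far more (full unification of compound terms, renaming of rule variables, backtracking with cut, and so on):

```python
# A toy "facts plus logic processor", sketched in Python (not Prolog syntax).
# All facts, rules, and names here are invented purely for illustration.

FACTS = [
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
]

# One rule: grandparent(X, Z) holds if parent(X, Y) and parent(Y, Z).
RULES = [
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(term):
    # Convention borrowed from Prolog: capitalized names are variables.
    return isinstance(term, str) and term[:1].isupper()

def walk(term, bindings):
    # Follow variable bindings until reaching a constant or an unbound variable.
    while is_var(term) and term in bindings:
        term = bindings[term]
    return term

def unify(a, b, bindings):
    # Try to match two tuples position by position, extending the bindings;
    # return the extended bindings on success, or None on failure.
    if len(a) != len(b):
        return None
    bindings = dict(bindings)
    for x, y in zip(a, b):
        x, y = walk(x, bindings), walk(y, bindings)
        if x == y:
            continue
        if is_var(x):
            bindings[x] = y
        elif is_var(y):
            bindings[y] = x
        else:
            return None
    return bindings

def solve(goals, bindings=None):
    # Depth-first search for every set of bindings that satisfies all goals.
    bindings = bindings or {}
    if not goals:
        yield bindings
        return
    first, rest = goals[0], goals[1:]
    for fact in FACTS:
        b = unify(first, fact, bindings)
        if b is not None:
            yield from solve(rest, b)
    for head, body in RULES:
        b = unify(first, head, bindings)
        if b is not None:
            yield from solve(body + rest, b)

# Analogous to the Prolog query ?- grandparent(Who, ann).
for result in solve([("grandparent", "Who", "ann")]):
    print(walk("Who", result))   # prints: tom
```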

If this sounds like a simply bizarre way to create a software program, let me assure you, as a standard procedural programmer, I found this approach almost laughably alien to grasp. As a general-purpose programming language, the power of this approach is in the ability to create extraordinarily complex computational flows indirectly through the use of clever logical statements and a firm understanding of the algorithm underlying the logic processor. I reached the point of “seeing” this power, but certainly have not mastered the art of harnessing it effectively.

That being said, the underlying combination of a database of “facts” with a logic processor remains intriguing. A common “textbook” use of Prolog is to create an expert system. An expert system is a set of encoded facts which can be asked an encoded question (or series of questions) to receive an encoded answer. In the most trivial of expert systems, the question posed will have a true or false answer, and the system will respond with “True” or “False”. Slightly more advanced is a system structured as a diagnostic tool, which asks the user a series of questions, starting from the most general and moving to the increasingly specific, with each question selected by the system based upon the user’s previous answers. Most of us have undoubtedly dealt with automated help systems that are poor implementations of this form of expert system.
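
As a rough illustration of that diagnostic style, here is a minimal sketch in Python. The question tree and conclusions are invented for the example; a real system would carry far more branches and would be derived from an actual body of expertise:

```python
# A toy diagnostic "expert system": a tree of yes/no questions, from general
# to specific, where each answer selects the next question or a conclusion.

TREE = {
    "question": "Does the computer power on at all?",
    "no":  {"conclusion": "Check the power supply and cable."},
    "yes": {
        "question": "Does the operating system finish booting?",
        "no":  {"conclusion": "Suspect a disk or boot-configuration problem."},
        "yes": {"conclusion": "The fault is likely in an application, not the hardware."},
    },
}

def diagnose(node):
    # Walk the tree, asking the user at each branch, until a conclusion is reached.
    while "conclusion" not in node:
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node["conclusion"]

if __name__ == "__main__":
    print(diagnose(TREE))
```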

Can one use such a declarative language system to encode a body of knowledge, and then examine its internal consistency? Or ask it a question and receive an answer? The trick here is in the encoding, as is the case in any attempt to reduce knowledge to a symbolic logical form. Each “term” in the system needs to be exhaustively defined, along with all of its inter-relationships with other terms. A key problem is setting boundaries on the universe of terms introduced to the system. There will always be a practical limit to the number of terms and relationships that the system can manage, and the problem of precisely defining the terms themselves, and of capturing all actual connections between terms, rapidly becomes overwhelming. It is quite likely that the task of creating a sufficient database of encoded facts is simply impossible to accomplish.
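
The following sketch, again in Python with invented terms and relations, hints at both halves of the problem: a machine can mechanically check the consistency of whatever relationships have been encoded, but only of those, and the number of potential relationships grows far faster than the number of terms:

```python
# A sketch of the encoding problem: terms, explicit relationships between them,
# and a mechanical consistency check. All terms and relations are invented.

IS_A = {
    ("table", "furniture"),
    ("furniture", "man_made_object"),
    ("petrified_oak", "man_made_object"),   # a deliberately questionable encoding
    ("petrified_oak", "tree"),
    ("tree", "living_thing"),
}
DISJOINT = {("man_made_object", "living_thing")}   # declared mutually exclusive

def ancestors(term):
    # Collect every broader category reachable by following is-a links upward.
    found, frontier = set(), {term}
    while frontier:
        parents = {p for (c, p) in IS_A if c in frontier}
        frontier = parents - found
        found |= parents
    return found

def check_consistency():
    # Report any term that falls under two categories declared to be disjoint.
    terms = {c for (c, p) in IS_A} | {p for (c, p) in IS_A}
    return [(t, a, b) for t in terms
            for (a, b) in DISJOINT
            if {a, b} <= ancestors(t) | {t}]

print(check_consistency())   # flags petrified_oak under this particular encoding

# The practical limit mentioned above: n terms already admit n*(n-1)/2 possible
# pairwise relationships, before any richer kinds of relation are considered.
n = 1000
print(n * (n - 1) // 2)      # 499500 candidate pairs for even a modest vocabulary
```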

I recall seeing one attempt to examine the possibilities of using such a system in this manner. In fact, the author of the software attempted to use Objectivism-based concepts as the terms in several of his examples. I have no idea if this system is still available, but if I find it on the Internet, I’ll be sure to post a link for interested parties. My recollection is that I could not understand the encoding scheme he was attempting to use at all, and that I made little or no headway in creating my own examples.

An Old Kind of Science

In my current thrashing about within AI and computational science topics, I returned once again to consider Wolfram’s New Kind of Science material. In brief review: Stephen Wolfram has spent the better part of his life fascinated with computational complexity arising from simple algorithms. Initially working from cellular automata, Wolfram has built up an impressive set of equivalent computational systems, all exhibiting the same fundamental patterns of behavior. These patterns fall into four classes: Class 1 systems evolve into static conditions; Class 2 systems evolve into periodic patterns; Class 3 systems become “chaotic”; and Class 4 systems evolve into the most interesting behavior, best described as “complex” or structured randomness. After spending many years studying a vast variety of simple systems, Wolfram published an enormous treatise, “A New Kind of Science”, back in 2002 to describe his findings.
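
For readers who have not seen these systems, here is a minimal sketch of an elementary one-dimensional cellular automaton, run with Wolfram’s Rule 30, one of his standard examples of Class 3 behavior (the width, step count, and display characters are arbitrary choices for the example):

```python
# A minimal elementary cellular automaton, run with Wolfram's Rule 30 as an
# example of a trivially simple rule producing Class 3 ("chaotic") behavior.

RULE = 30            # the 8-bit rule number; bit i gives the new cell for pattern i
WIDTH, STEPS = 79, 40

def step(cells, rule=RULE):
    # Each new cell depends only on its left neighbor, itself, and its right neighbor.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run():
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1          # start from a single black cell in the middle
    for _ in range(STEPS):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells)

if __name__ == "__main__":
    run()
```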

I had been following Wolfram and the general field of complexity, and in particular automata, off and on throughout my post-college years. When I heard of this new work, I couldn’t wait to receive a copy (a Christmas present from my wife, if I recall correctly). I devoured the book upon receiving it, and was thoroughly… shocked and disappointed! Here was not a revolutionary theory explaining computational complexity. Instead, Wolfram had produced an overwhelmingly conceited presentation of example after example of complexity arising from simple systems, with nothing to offer in the way of theory. Indeed, in his opening remarks, he describes his “new kind of science” as one that doesn’t fit the traditional dogma of science, with its requirement of hypothesis and proof and its build-up of mathematical explanatory theory. Rather, this seems to be “science by example”. Although intriguing examples are presented, he offers no hope of a method by which to discover which simple machines are good at representing a given phenomenon; he seems to favor brute-force searching for rules of interest. The applied use of this “science” instantly becomes highly questionable.

He is wrong in describing his form of “science” as new. Rather, he should have used the term “primitive”. Wolfram’s study (and those of his followers) is reminiscent of early observational astronomy. When I read through the vast number of examples, the presentations of “interesting” observations, and the displays of aesthetically pleasing patterns, and see this described as “science”, I am reminded of Tycho Brahe cataloging the exact positions of points of light in the firmament. Even in modern astronomy, there remains this role of the passive observer who catalogs observation after observation.

About a month ago, I was driving my family home from the movies, and I observed a fireball. My wife doubted what we had seen, but from my previous experience I was sure it was a fireball. A couple of days later, I happened upon the American Meteor Society (amsmeteors.org), and actually filled out a fireball observation report. Sure enough, within a few days I received an email indicating that three other observations corroborated my sighting. Although this was a “fun” exercise, what purpose is served by the AMS? They simply catalog observations. Meteors are just about fully understood, and certainly no set of amateur sightings is going to advance the state of our knowledge of meteors.

Returning to Wolfram’s field, what I see here is a very primitive set of observations. From reading his enormous book and looking through the discussion forums he has sponsored (wolframscience.com), I see very little interest in the creation of an explanatory or predictive theory of computational complexity. I see a fascination with examples, an overblown interest in aesthetics (which are truly meaningless, as they are purely a function of how the data generated by these systems is presented), and just about no interest in applying the field to real-world problems. What I see, in essence, is an amazingly powerful failure.

Despite my grim assessment of Wolfram’s approach to the field of study he has (primarily) defined, I still believe it is a fundamental key to understanding much of the world that surrounds us.

Thoughts on Neural Networks

I’m nearing completion of On Intelligence, past the point where the author stops presenting his sketch of a theory of intelligence and moves into the land of speculation. He has answered the question “what is consciousness?” in a rather straightforward manner (it is the experience of being intelligent), but has not yet tackled the question of free will.

Listening to this book has led me back to reading articles from the rather large collection of journals I have on the topic of computational intelligence. The journals are the IEEE Transactions on Neural Networks, on Evolutionary Computation, and on Fuzzy Systems. The papers in the first two journals (I have not actually read any of the fuzzy-logic journal to date) are mostly very narrow investigations of esoteric topics in their fields. A neural network applied to this problem needed to be modified from its traditional form in such and such a manner, and either solves, or sort of solves, the problem. Other papers are taxonomies of algorithms or computational structures all falling under the same heading, with the author adding some little twist to the last category described. These are the papers that can usually be skipped without missing anything essential, and they are all written by grad students, with some professor listed as one of the authors.

Occasionally, there is a massive article that introduces some fundamentally new concept. But (so far) even those fundamental concepts are not so fundamental as to significantly advance the field. Even rarer are articles that seek a border between theory and reality, or, rarest of all, between theory and philosophy. They are in there, but they are tough to spot.
