Category Archives: Technology

The Quest for a Thinking Assistant, Part II

The tools I discussed in my earlier post on this topic only attempt to deal with the storage of information. Organization of knowledge, the attempt to integrate various facts or propositions, relies entirely upon the human. The tools are designed to improve the efficiency with which information can be organized, while allowing for changes in the organizational structure. As previously noted, I have not yet found a viable solution which handles that challenge well.

Another of my long-term fascinations has been in the area of what is conventionally called “Artificial Intelligence”. For years, this interest was centered on the possibility – which fueled the majority of AI research – that a machine could actually attain “intelligence”, and even consciousness. After a great many years of attempting to integrate this possibility with the tenets of Objectivism – and this only after becoming a serious student of Objectivism – it became clear to me that such a machine can never exist. Specifically, it is impossible for a machine to attain a rational consciousness, as this requires a reasoning faculty, aware of its surroundings, operating with free will, performing goal-directed actions. Free will is impossible to devise in a machine; therefore values cannot be held by a machine (that connection requires additional comment, but not tonight), therefore goals cannot be developed, and consciousness cannot occur.

I’ll leave that direction of thought for a later discussion. My purpose tonight is to discuss my search for a thinking assistant using AI concepts. What can be achieved by a machine is a repository of information, combined with a logic processor. There are two approaches to combining these elements which AI research has developed to a level of sophistication sufficient to be considered possible routes to a useful tool for the storage and maintenance of structured information. In this and an ensuing post, I’ll discuss each in turn.

The first approach of interest is the development of declarative programming languages. These languages, of which Prolog is the flagship, rely upon an interpreter containing a generalized logic processor. This processor can determine the truth or falsity of statements of symbolic logic [the Objectivist reader cringes], once the truth-state of the parameters used by a symbolic logic statement is determined. The language harnesses this logic processor through the definition of a grammar which allows a programmer to state a set of “facts”, and then construct a complex logical statement using these facts, which the logic processor will attempt to “prove”. In the process of solving the logical problem posed, the system achieves the programmer’s desired goal through various “side effects” generated as the system traverses the logical statement and facts in the quest for all possible “solutions” to the problem.
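The flavor of this fact-plus-rule style can be suggested in a short sketch. This is not Prolog, and it hides the generality of a real logic processor; it is a minimal Python analogy, with invented predicate names (parent, ancestor), of how a database of facts and a recursive rule combine to “prove” a query.

```python
# A database of "facts", in the Prolog spirit: parent(tom, bob), etc.
# These names are purely illustrative.
FACTS = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def ancestor(x, y):
    """ancestor(X, Y) holds if parent(X, Y), or if parent(X, Z) and
    ancestor(Z, Y) for some intermediate Z -- encoded as a recursive prover."""
    if ("parent", x, y) in FACTS:
        return True
    # try every Z for which parent(x, Z) is a known fact
    return any(
        ancestor(z, y)
        for (pred, px, z) in FACTS
        if pred == "parent" and px == x
    )

print(ancestor("tom", "ann"))  # True: tom -> bob -> ann
```

A real Prolog interpreter does far more: it unifies variables, backtracks, and enumerates all solutions rather than answering a single yes/no query.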

If this sounds like a simply bizarre way to create a software program, let me assure you, as a standard procedural programmer, I found this approach almost laughably alien to grasp. As a general-purpose programming language, the power of this approach is in the ability to create extraordinarily complex computational flows indirectly through the use of clever logical statements and a firm understanding of the algorithm underlying the logic processor. I reached the point of “seeing” this power, but certainly have not mastered the art of harnessing it effectively.

That being said, the underlying combination of a database of “facts” with a logic processor remains intriguing. A common “textbook” use of Prolog is to create an expert system. An expert system is a set of encoded facts, which can be asked an encoded question (or series of questions) to receive an encoded answer. In the most trivial of expert systems, the question posed will have a True or False answer, and the system will respond with “True” or “False”. Slightly more advanced is a system structured as a diagnostic tool, which asks the user a series of questions, proceeding from the most general to the increasingly specific, with each question selected by the system based upon the user’s previous answers. Most of us have undoubtedly dealt with various automated help systems that are poor implementations of this form of expert system.
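The diagnostic structure described above can be sketched as a tree of encoded questions, traversed general-to-specific, with the next question chosen by the previous answer. The questions and diagnoses below are invented for illustration; a real expert system would derive its questions from a rule base rather than a hand-built tree.

```python
# A toy diagnostic "expert system": dict nodes are questions with yes/no
# branches; string leaves are the encoded answers (diagnoses).
TREE = {
    "q": "Does the device power on?",
    "no": "Check the power supply.",
    "yes": {
        "q": "Does the screen display anything?",
        "no": "Suspect the display cable.",
        "yes": "Suspect a software fault.",
    },
}

def diagnose(node, answers):
    """Walk the question tree, consuming one yes/no answer per question."""
    while isinstance(node, dict):
        node = node[answers.pop(0)]  # follow the branch for this answer
    return node  # a leaf is the diagnosis

print(diagnose(TREE, ["yes", "no"]))  # -> "Suspect the display cable."
```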

Can one use such a declarative language system to encode a body of knowledge, and then examine its internal consistency? Or ask a question and receive an answer? The trick here is in the encoding, as is the case in any attempt to reduce knowledge to a symbolic logical form. Each “term” in the system needs to be exhaustively defined, along with all of its inter-relationships with other terms. A key problem is setting boundaries on the universe of terms introduced to the system – there will always be a practical limit to the number of terms and relationships that the system can manage, and the problem of precisely defining the terms themselves, and of all actual connections between terms, rapidly becomes overwhelming. It is quite likely that the task of creating a sufficient database of encoded facts is, in practice, impossible to accomplish.
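The scaling problem can be made concrete with a line of arithmetic: even under the generous assumption that each pair of terms has at most one relationship, the number of candidate pairwise relationships grows quadratically with the number of terms.

```python
# Upper bound on pairwise relationships among n terms: n * (n - 1) / 2,
# i.e. the number of unordered pairs. (Real knowledge encoding is worse,
# since relationships may be directed, typed, or involve more than two terms.)
def max_pairwise_relations(n_terms):
    return n_terms * (n_terms - 1) // 2

for n in (10, 100, 1000):
    print(n, "terms ->", max_pairwise_relations(n), "possible relations")
# 10 terms yield 45 pairs; 1000 terms already yield 499500.
```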

I recall seeing one attempt to examine the possibilities of using such a system in this manner. In fact, the author of the software attempted to use Objectivism-based concepts as the terms in several of his examples. I have no idea if this system is still available, but if I find it on the Internet, I’ll be sure to post a link for interested parties. My recollection is that I could not understand the encoding scheme he was attempting to use, and made little or no headway in creating my own examples.


The Quest for a Thinking Assistant, Part I

Throughout my career, both as an engineer and as a lifelong student of a host of topics, I have repeatedly faced the problem of how best to organize information in an external representation to reflect and aid my internal understanding of a field of study. By “external representation” I mean anything from a simple handwritten notebook to an online database of facts. I have tried a great variety of methods (and have succeeded in gaining a greater variety of knowledge) over the years, but none have been sufficient to allow me to refer back to them at a future date and efficiently recover my internal knowledge to a level I deem acceptable.

A spiral-bound notebook was generally the first method to which folks of my generation were introduced. While reading a text, or listening to a lecture series, notes are taken in a serial manner by hand. The structure of the serial notebook is static, and editing is difficult to impossible, so any later effort to reorganize the material to better represent the abstractions and relationships arrived at through contemplation requires reproducing the material in a new notebook. Folding in a second source of material – merging notes from a lecture with notes from a text, for example – is equally difficult. A small step up is the loose-leaf notebook, which at least allows insertion of later material, but the other serious problems of static text remain.

And then personal computers arrived. Static text was no longer a problem, though collecting material still meant handwriting, or reading, or listening to recorded lectures, in front of a terminal. I used a rather slick word processor (ChiWriter), which allowed the use of mathematical symbols (my main interest in that period being the mathematics of dynamic systems), but I rapidly found the process of creating and organizing a large electronic notebook daunting. The structure of the notebook was generally dictated by the first large text I read on a topic. Subsequent material then had to be manually merged into this structure until it became evident that the structure was imperfect and needed a different hierarchy. Dealing with the mess of reorganization in a flat word processor made the whole thing terribly arduous and distracted mightily from the process of learning.

Next came Think Tank, a DOS program which was really no more than an outlining program. This was marginally better in principle, as larger sections of text could be manipulated in a collapsible grouping, but the program was not really intended to hold large bodies of text, and – with my interests still primarily requiring mathematical notation – the lack of anything other than ASCII input made the tool fairly useless.

More recently, I have examined the use of “mind mapping” software systems (MindManager by MindJet is a commercial product, though FreeMind and CMaps are equivalent or even better freeware systems). At first, these looked more interesting, by allowing a more general mapping of concepts and relationships in a not-necessarily hierarchical order. However, these tools fail on two counts. First, there remains the clumsiness of dealing with large amounts of detail in a pictorial representation (solutions are offered for this, but they consist of mere hyperlinks to documents). But the more fundamental failure is that knowledge is hierarchical, and allowing freeform relationships between concepts leads to a much more confusing, and ultimately non-rational, representation of the data to be organized.


Engineering Bravery part II

It is proper for engineers to assess risk in the undertaking of any project, and to monitor the development and implementation of plans to mitigate the identified risks, to raise the probability of success in the endeavor. A standard approach to measuring risk is to evaluate the probability of the risk occurrence and the magnitude of the consequence on equal scales (for example, a scale of 1-5). The overall risk is then computed as the product of the probability score and the consequence score. Depending on the magnitude of the effort, and the nature of the identified risk, it may be appropriate to accept some risks which are either impossible to mitigate, or for which mitigations would be prohibitively expensive or time-consuming to implement. (In this latter case, attempting to resolve the risk will create a new risk to the project in the form of a cost or schedule problem.)
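The scoring scheme just described reduces to a few lines of code. The risk names and the mitigation threshold below are illustrative assumptions, not part of any particular standard; only the probability-times-consequence product on 1-5 scales comes from the text above.

```python
# Standard risk scoring: probability and consequence each on a 1-5 scale,
# overall risk = probability * consequence (range 1..25).
def risk_score(probability, consequence):
    assert 1 <= probability <= 5 and 1 <= consequence <= 5
    return probability * consequence

# Hypothetical risk register for a project (names invented).
risks = [
    ("supplier delay", 4, 2),
    ("structural failure", 1, 5),
    ("test rig unavailable", 3, 3),
]

# Rank by score; the threshold of 9 for "mitigate" is an arbitrary choice.
for name, p, c in sorted(risks, key=lambda r: -risk_score(r[1], r[2])):
    action = "mitigate" if risk_score(p, c) >= 9 else "accept"
    print(f"{name}: score {risk_score(p, c)} -> {action}")
```

Note that a low-probability, high-consequence risk ("structural failure", score 5) ranks below a middling one ("test rig unavailable", score 9) under this scheme, which is exactly why some high-consequence risks warrant separate treatment regardless of score.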

However, as discussed in a previous post, it is evident that there is an increasing resistance to the acceptance of risk across a large segment of industry. What is the underlying source of this increase in risk aversion?

One possibility is that risk aversion is driven by consideration of economic consequences. In a highly competitive market, the profit margin is steadily reduced for established products. Any error in the design or manufacture of a product will result in an increased cost, and will threaten the profitability of the manufacturer. Hence, it can be argued, any risk of error must be taken as a risk to profitability, and thus to the very survival of the firm. Although I can agree that this is a valid concern for risks with extraordinarily high consequences, such as risks which may result in liability, or those which could stop production entirely, I also perceive that the level of acceptable risk has been lowered much more dramatically than these increases in consequence can explain.

Considering the matter in more depth, I am led to the conclusion that although the consequences of risks may be evaluated more highly in markets with narrow profit margins, it is in the evaluation of the risk probability that a greater overall increase has occurred. What I have seen personally is a greater concern over whether engineers have an accurate understanding of the likelihood of a problem occurring. Particularly when an innovative approach to a process is suggested, with the goal of increasing profitability, there is a heightened concern over whether this innovation will fail. The level of proof required has risen dramatically. Although this is championed at times by managers as a sign of “maturity”, or as an approach more scientific than the former practice, I suspect what is really occurring is far different.

Where does the sense of a greater probability of failure come from? It is based in the emotion of fear. And the source of the fear is very often stated bluntly – “how can we be certain? … we don’t know what might happen … you never know …”. The fear has an epistemological source. The quest for engineering certainty cannot be satisfied by any means other than the application of reason, and reason has come to be doubted as a source of truth.


Engineering Bravery

On January 27, 1967, fire swept through the cabin of the Apollo 204 spacecraft (later re-designated Apollo 1) during a countdown test sequence, killing all three astronauts on board. An investigation followed, completed in April 1967, and modifications were made to the design of the Apollo Command Module cabin and life support systems (replacing pure oxygen with a more natural mix of nitrogen and oxygen, plus other modifications). Three unmanned flights [not including Command Modules] were launched prior to the next manned mission, Apollo 7, launched October 11, 1968 – 21 months after the Apollo 1 tragedy.

The next mission, Apollo 8, was originally planned to test docking of the Lunar Module and Command Module in Earth orbit; however, the Lunar Module was not going to be delivered on time. In August of 1968 it was decided that the Apollo 8 mission would be altered, rather than delayed, to circumnavigate the moon – instead of simply attaining Earth orbit. The mission plan was reworked in three months, and the launch occurred on December 21, 1968 (and was an amazing success). The timing is important here – this change in mission occurred before the launch of Apollo 7.

Apollo 13 launched on April 11, 1970. Two days into the flight, an oxygen tank exploded, and there followed four days of high suspense, resulting in the safe recovery of the crew. A review board was immediately assembled, and its report finalized in June 1970. The launch of the next manned mission, Apollo 14, occurred on January 31, 1971 – only 9 months after the Apollo 13 episode.

On January 28, 1986, the Shuttle Challenger was destroyed shortly after liftoff, killing all 7 astronauts on board. The formal investigation was completed in June of 1986, and the next manned flight launched on September 29, 1988 – 32 months after the Challenger disaster.

On February 1, 2003, the Shuttle Columbia was destroyed upon re-entry, killing all 7 astronauts on board. The formal report on the accident was released in August 2003. The next manned mission launched on July 26, 2005 – 29 months after the Columbia disaster. Also in response to this accident, the retirement of the Shuttle program was announced, with a termination date preceding the expected date on which a replacement orbital vehicle will be qualified for use.

I realize these facts are very narrow in scope, and therefore are not authoritative evidence of a trend; however, I use them to illustrate what I contend is a more general change in the role of risk assessment in some fields of Engineering. In the case of NASA, it is very clear that the level of acceptable risk to human life has decreased dramatically since the start of manned missions, to the point now where the presence of any significant level of risk may result in the end of manned spaceflight for a lengthy period of time.

In the pharmaceutical industry, Government regulation directly controls the rate of progress, by demanding ever more stringent levels of safety before new products can be released for public use. In other areas of medical science, a combination of Government and insurance industry controls limits the rate of progress. Similar effects on the rate of progress can be seen in energy technology, the transportation industry, civil engineering (think of building codes), and now we see the beginning of these effects in information technology, with the rising concern over “security”. A valid concern over malicious attacks against high-value targets (the military, banking systems, personal information databases) has spawned increasing paranoia over attacks against individual, personal machines.

The growth of what I call the Quality Industry is another strong trend toward risk aversion. The response to the Far East threat to American manufacturing has been a fascination with improving product “quality”, which can be interpreted as lowering the rate of defects in products. This thrust has taken several forms over the past 30 years, migrating from buzzword to buzzword. I have had the perspective of watching this trend progress at a single company for 20 years. Where we now stand, any defect – whether in the manufactured product, the process of manufacture, or the verification of tolerances – results in a formal “Corrective Action Request”. Each CAR is reviewed by the Quality department (which, of course, has a vested interest in the CAR process). Upon approval by Quality (and I’ve never heard of a CAR being rejected by Quality), each CAR requires a response, including root cause analysis, and a formal corrective action to ensure that the defect cannot occur again. These corrective actions invariably take the form of additional cross-checking and the institution of more manufacturing controls, all targeting a reduction in risk – and once established, these new rules are not to be overturned. What results (in our company in particular, but I am willing to generalize) is a gradual slowing of the manufacturing process, steadily increasing costs, and only marginal improvements in product quality.

In a separate post, I will examine the philosophical source of this trend.


The Role of Technology in Human Evolution

There is a rather popular conception that modern technology will result in the end or drastic curtailment of human evolution. With the advent of various medical improvements, those with conditions that would otherwise result in high mortality prior to adolescence are now much more likely to survive to a reproductive age, and thereby will not be bred out of the gene pool. The result, it is argued, is that we will evolve into an increasingly weakened population, exhibiting a rapid increase in the severity and number of serious genetic conditions.

To counter this argument, we need to recall the unique nature of the human organism. Our primary survival skill is our rational faculty, and its ability to create increasingly advanced understanding of the Universe. This in turn allows the development of a steadily advancing technology. By “technology” is meant a set of tools allowing the modification of our physical world to better suit our needs and desires. While other animals can, in a very limited sense, alter their immediate environment to increase their likelihood of survival through the instinctive actions that they have evolved to exhibit, humans have a much more comprehensive ability to modify their survival environment. The beaver builds a dam to slow the flow of water, allowing the beaver to live in calmer waters. Man builds a hydroelectric power plant to provide electricity to thousands of homes, allowing a city to exist in a sub-freezing climate. Key to the human condition is the ability to pass acquired knowledge from generation to generation, allowing a continually accelerating advancement of knowledge, and commensurate with this rise in understanding, a continually rising level of technology.

Technology allows adaptation of the environment such that the average fitness of the individual in that environment is improved. Acting at a pace relative to which biological evolution is effectively stationary, technology dramatically improves the likelihood of species survival, as it enables rapid adaptive change of the effective environment as the underlying environment changes. As an example, consider the advent of a lethal virus – for concreteness, say the “bird flu” did, as the media scaremongers are suggesting, turn into a human pandemic. When such events occurred in the pre-industrial period (e.g. the bubonic plague), huge segments of the population were lost, and it can be envisioned that such an event could lead to rapid extinction, in a time period far too short to allow biological evolution to save the species. In the presence of modern technology, a vaccine can be expected to be developed in a matter of years, resulting in the preservation of a large, perhaps majority, segment of the human species. The effective environment for humans would thereby be modified to neutralize the effect of the bird flu virus.

This observation, however, does not nullify the process of evolution. Rather, it makes humans less dependent on it for survival, specifically with respect to events occurring much faster than evolutionary time scales.

To further illustrate the permanent presence of evolutionary processes, let us consider another (possibly fictional) concrete example. I will conjecture that in the 1800s, the occurrence rate of dangerously high blood pressure in teenagers, caused by genetic defects, was much lower than at the present time. My hypothesis is based upon current medical practice, which includes early screening for high blood pressure, and available medication for its treatment. In the 1800s, with these practices not in place, mortality rates for such a condition prior to reproductive age would have resulted in those genetic conditions being virtually eliminated from the gene pool. Today, the effective environment has changed to neutralize the effects of such a condition, and evolution has been “allowed” to produce an increasing population of individuals with adolescent high blood pressure.

Let us also suppose that at some date in the future, a societal collapse occurs, in which medical technology is no longer available for the treatment of this condition. The effective environment has now changed to put those with high blood pressure at much higher survival risk. In this event, the process of evolution will continue, and will rapidly remove the genetic variation that results in adolescent high blood pressure from the gene pool.
