Imitation Game

We’ve just returned from watching The Imitation Game, starring Benedict Cumberbatch. This is a very well executed and deeply themed film. The story of Alan Turing, the Enigma machine, and the cracking of its code during the Second World War by a small group of mathematicians and linguists, using the codebreaking machine Turing developed (a forerunner of the electronic computer), is well known to most computer scientists and mathematicians, particularly those of us working in the defense industry. The portrayal of this three-year-long story in a roughly two-hour film is undoubtedly imperfect, but the key elements of the reality of struggling with a massively important problem under severe time pressure are well represented.

Less well known may be the reality of the personal isolation of the true intellectual. This isolation begins at an early age, when the brightest are ostracized for being odd and socially awkward, as their mental strength focuses on learning and undervalues social interaction. Exposure to verbal or physical attacks from age peers who either do not value intelligence, or fear those they cannot understand, leads to a spiral of further isolation and further abuse throughout adolescence and into adulthood. This is exhibited extremely well in the film, and is not unnecessarily exaggerated.

But the true power of the film, reflecting the real-world story, is the tragic destruction of this brilliant man – a war hero not of brawn but of mind – by a society that could not be allowed to know of his greatness. Even had his contribution not been hidden by military secrecy, the general public would not have valued his achievement, precisely because it was a victory of mind and not metal. The unthinking hatred of deviancy that created the law by which Turing was destroyed is the true importance of this story. That hatred of the unusual may be fading in the form of homophobia, but it remains very potent toward anyone who becomes “too” successful, whether in wealth, in business, or in intellect.

The plot is uncomplicated, and the theme not at all hidden, but in a modern film presenting a difficult message, depth and complexity are not to be expected. I strongly recommend this film to anyone past the age of 12, as long as the parents are willing to handle a conversation about homosexuality. This element of the movie is handled with unexpected tact, with no objectionable scenes or conversations. The one sexually related (and completely unnecessary) comment that verges on the tactless is presented so obliquely that I didn’t even recognize it for what it was until several seconds after it had been delivered.

In summary, this is not a movie to miss. One more positive, in my opinion, is that unlike the protagonist of the similar story portrayed in A Beautiful Mind some years ago, Turing is not a neurologically flawed genius. He is a brilliant engineer (not a scientist) who applies his skill to an immediate, real-world problem, and wins a war of minds, thanklessly.

Douglas Adams

I took a very long break, but I’m hoping to be back. No promises or excuses offered.

This review is admittedly a bit out of date. I recently read the entire collection of science fiction written by Douglas Adams, who died quite young in 2001. I had read his first four books back when they were current (The Hitchhiker’s Guide to the Galaxy, The Restaurant at the End of the Universe, Life, the Universe and Everything, and So Long, and Thanks for All the Fish). I recall them as being hilarious, but I wasn’t reading them with the same mind I now have.

The books are still hilarious. They are also written by a genius. However, they become disappointing.

Adams writes in the introduction to this compendium of five books that the story they tell is self-contradictory, inconsistent, and basically just a mess in terms of organization. Furthermore, he points out that the radio show, the television show, and the eventual movie all disagree with one another and with the books. Interestingly, that really just expands upon the actual theme of his work.

The evidence of genius runs throughout his work. The humor is multilevel – enough of the silly and the nonsensical to make a child laugh, plenty of content to entertain the geeky teen, but also some rather deeper material that leaves the mathematician and the physicist both laughing and yet turning over the humor later in their minds, expanding upon the consequences of the ideas Adams throws out.

An excellent example is the Improbability Drive that powers one of the key spacecraft. The idea here is that due to quantum physics, the actual position of anything is only a matter of probability. I am most likely here in my house, but there is a finite, non-zero, but ridiculously small chance that I am actually in your living room. So, Adams imagines, suppose there is a fifth dimension of the Universe in which probability can be manipulated. And suppose we had the ability to travel in that dimension. If we knew the actual probability that I am in your living room, then we could travel the probability dimension until we were at that value, and suddenly I would find myself in your living room.

The idea does not hold up to close examination, but here are the implications used in the books. When we run the Improbability Drive, all sorts of insanity happens, and Adams uses that insanity to create masterful humor. Whales appear in mid-atmosphere, people change into penguins, that kind of thing. But he can do more with this craziness. If we have even a limited ability to manipulate probability, as an early step in improbability research, we need only an estimate of the probability of discovering how to create an improbability drive, and we can just move to that point on the probability dimension – and we’ll have one!

I’ve had a similar notion about building a time machine – at least one that travels backward in time – and I’ll use it to prove that backward time travel is impossible (or, stealing Adams’s concept, at least extraordinarily improbable). If such a thing could happen, man will build one. If he builds one, he will use it. If it is used, it will travel backward in time. If it does so, man will see it and copy it to make a time machine. In that way it creates itself. Since we don’t have time machines by now, they cannot be possible.

Adams also deals with the implications of time travel. Rapid and easy travel between places has gradually homogenized cultures on Earth (not as much as some of us would wish, but it is true that the whole world knows McDonald’s and Coca-Cola). Of course Adams immediately extends that to space travel making the Universe homogeneous (very inconsistently expressed throughout the books), but he also extends the idea humorously to time. He creates a society for the protection of history from time tourists, and laments that everywhen is beginning to be the same.

This kind of insight mixed with outrageous humor is Adams at his best. Another example is bistromathematics, fueling another form of space travel with the mathematics involved in settling the bill, including the tip, of a party of engineers eating in an Italian restaurant. Add in the larger plot, if we can really call it that, of the Earth being an organic computer run by mice, who are really 5-dimensional beings, built to answer the question of what is the ultimate question of Life, the Universe, and Everything (the answer is known to be 42, but they don’t know the question), and mix in God’s last message to Man: “We apologize for the inconvenience”, and you see what we are dealing with from the mind of this genius.

But ultimately the very structure of the humor undermines the entire work. The theme is that the built-in engine of the Universe is nonsense. Adams’s universe has no rules, no order, no physics, no primacy of existence. As the end of the work approached, I was becoming uncomfortable with the idea that it might not come to any conclusion at all. Because Adams had died young, I was preparing myself to accept a non-ending. I have very deep ties to reading that engages me. I do not like endings that leave loose ends, or that end miserably.

But Adams did not leave loose ends, at least not the bigger ones. Instead he took the very easy way out in the end, which is arguably the only clean ending nonsense can have. Everyone dies, and the great problem of the work dies with them.

So I do recommend the book – mostly for teenagers, though the geeky adult will enjoy it as well. The geeky philosopher will, however, get a great deal out of it, though it will wind up in the category of philosophical tragedy.

Scientific Hypothesis, Theory and Law

A class member recently asked me what the difference is between a theory and a law.  I believe this is a very common confusion, usually stemming from a popular view that a scientific theory is “just a theory”, implying it is less than likely to be true.  Similarly a “law” is loosely viewed as some form of universal and inviolable truth.

Here is my understanding (after some reflection) on what these terms mean.  Let us start with a sketch of what human knowledge is and the process by which it is accumulated.  I will preface what follows by saying that I am discussing what is commonly called “scientific knowledge”, that is, knowledge obtained through the observations achieved with our human senses, and then acted upon by human reason.  This statement does not exclude the use of instruments with which we may collect information, but we ultimately receive that information into our rational mind through our senses.

The senses present information to our minds. Our minds then group these sensory perceptions into clusters of similar observations.  When enough similar observations have been made to understand what they have in common, our minds replace the individual observations with a concept. A concept is a mental representation of a group of similar things which share common defining features. For example, after observing a series of objects which have four wheels, are self-propelled and carry individual or small groups of people over a paved surface, we will replace those observations with the concept “car”.

The detail of this process of concept formation is a major topic in itself which could fill many pages.  It applies not only to simple concrete objects, but also to ideas at all levels of abstraction.  Not only the concepts of “apple”, “dog”, and “house”, but ideas like “love”, “algebra”, and “gravitation” as well.  The concepts that are not directly reflected in objects we may refer to as abstract concepts.  The observations leading to these concepts are our recognition of other, lower-level, concepts.  So, for example, the concept of “fruit” comes from recognizing that the concepts of “apple”, “orange”, “banana”, and so on share common features.

When we start observing the relationships between concepts, and in particular relationships of cause and effect, we have reached the starting point for the creation of scientific theory.  A phenomenon is observed requiring explanation – the cause for the observed effect is sought.  After observing some number of occurrences of the phenomenon (A), we may find a common event that occurs at or before each occurrence (B).  From these repeated observations, we can form a hypothesis that B causes the occurrence of A.  At this level of development, we may have many different hypotheses attempting to explain the same phenomenon.

Here is an example.  A hot plate of glass dropped into cold water will shatter.  After observing this to occur for a small number of identical cases, we develop the following hypotheses to explain the breaking of the glass:  (A) Water breaks glass.  (B) The impact of the glass onto the surface of the water breaks the glass.  (C) A chemical reaction occurs between hot glass and cold water which breaks the glass.  (D) The difference in the temperature between the glass and the water causes the glass to break.  Each of these hypotheses is valid based on the observed events, as possible explanations for what has been observed.  They are, however, only hypotheses at this point because they have not been tested, nor compared carefully against the rest of our knowledge about the world.

We next perform some experiments to attempt to validate each of these hypotheses.  Experiment (1): We take a hot glass plate and drop it into cold gasoline; the glass shatters.  Experiment (2): We slowly pour cold water over a hot glass plate; the glass shatters.  Experiment (3): We drop a room-temperature glass plate into cold water; the glass does not break.  Experiment (4): We place a hot glass plate in a cold freezer; the glass shatters.

Experiment (1) makes hypotheses (A) and (C) unlikely.  Experiment (3) confirms that hypothesis (A) is not correct.  Experiment (2) makes hypothesis (B) untrue.  Experiment (4) indicates that (C) is even less likely, and confirms hypothesis (D).  Now, after a set of carefully designed experiments have been completed, hypothesis (D) can be considered to be a theory instead of a hypothesis.  The difference between a theory and a hypothesis is the observation of experiments which confirm the hypothesis, while eliminating alternative explanations.  Typically scientific experiments are designed and controlled to progressively narrow the number of alternative explanations by attempting to change one related variable at a time.

At this level of verification, we have a scientific theory, but not a law.  Comprehending the difference requires an understanding of how human knowledge is properly accumulated.  To be accepted as scientific knowledge requires not only that the theory be seen to be true experimentally, but that it be logically consistent with the current body of accumulated scientific knowledge.  This generally requires the completion of two processes.  The new theory must not be contradicted by the existing scientific knowledge, and it must be shown either that the theory fundamentally expands upon the existing knowledge – covering ground not logically derivable from it – or that it can be explained using the existing knowledge.

These conditions may seem to preclude revolutionary scientific advances – the Copernican revolution, or Einstein’s theory of gravitation for example.  It may similarly seem to contradict my conviction that the existing regime of “modern physics”, based upon the quantum theory and the Standard Model, will be overturned in the future.  However, this is not at all the case.  The presumption of the scientific method as outlined is that it has been consistently followed throughout the development of the existing body of scientific knowledge that the new theory appears to challenge.  But there have been large segments of scientific development that have failed to adhere to the scientific method.

The most common error within the historical body of scientific knowledge is the failure to base science solely upon observed facts and data.  Far too often some other, non-scientific, source of information has been used to set the groundwork upon which a scientific theory is constructed.  Historically, the most common non-scientific sources have been religious.

The geocentric theory to explain the motions of the celestial bodies was originally based upon a mixture of observation and assumptions about the nature of the astronomical objects involved.  Basic observations would suggest that the Earth lies motionless with the celestial objects rotating around it daily; however, when details of the motions of the planets and the Sun were measured with increasing accuracy, the defense of the geocentric theory rapidly turned from being based upon observations to being based upon mystical beliefs in the divinity, and therefore perfection, of the celestial objects.  Bruno, Copernicus and Galileo did not need to defend their theories against science, but against the Church.

The mystical thread in the development of astronomy did not end with Galileo.  Kepler accepted the Copernican system with great reluctance, and hampered his own success in determining the actual motions of the planets by adamantly insisting that the orbits were circular, because of the presumed divinity of the planets and the necessary perfection of their motions.  During his struggles to solve the problems of planetary motion he oscillated between difficult scientific investigations and lengthy whimsical fantasies based purely upon a quest for religious revelation.  Even Isaac Newton, inventor of calculus and the “law” of gravity, believed the planets to be living beings, and presumed the existence of a Prime Mover.

Einstein’s theories – both special relativity and general relativity (a new law of gravitation) – did not invalidate prior science, but rather were revolutionary for dramatically expanding upon existing theory without creating contradictions.  The general theory of relativity in particular was a re-statement of existing theory from a completely new perspective, which then allowed an enormous expansion of the ability of science to explain phenomena which had been observed but had been difficult or impossible to explain within the existing framework of physics.

But modern physics is far from having removed the effects of non-scientific thinking.  Quantum mechanics, developed in the 1920s and 1930s, sought to explain recently observed facts and data, but did so starting from outside existing scientific theory.  The origins of the quantum theory lie in a new set of assertions not derived from existing science, assertions standing in direct contradiction to the most fundamental assumptions of physics.  The new theory is capable of explaining a vast variety of experimental observations – as such it does qualify as a theory using the definition I have stated earlier.  However, all attempts to resolve the enormous contradictions that quantum theory brings against the fundamental elements of science have ended in failure, or in emphatic denial that the contradictions need to be resolved.

There are two core contradictions between quantum mechanics and the rest of established science; both are severe and extraordinarily fundamental.  Science presumes that every entity in the universe has definite and precise quantitative features.  An object has a location in space which it occupies.  An event occurs at a particular moment in time.  Man, and therefore Science, may be limited in the ability to know the values of these aspects of an object – and the limitation may even be a fundamental one which is physically impossible to overcome.  But quantum mechanics denies that there are specific values for some attributes (such as position and time), and holds that this lack of specificity is not a lack of knowledge: no specific values actually exist.

Science also presumes that all phenomena are caused by other phenomena – the law of causality.  Quantum mechanics explicitly denies this, and replaces the law of causality with probabilistic rules.  Quantum events are fundamentally uncertain and undetermined.  Attempts to develop interpretations of quantum mechanics that allow the law of causality to be retained evolve into a variety of absurdities, the best known being the “many worlds” interpretation, in which every event causes a divergence into multiple universes, one for each possible outcome – “many” in this sense being an insanely large number.

Returning to our main topic: a theory that successfully integrates into the existing body of (truly) scientific knowledge – that is shown not to contradict that knowledge, and that either is explained within its context or advances a new viewpoint which expands the basis for scientific knowledge – will attain the designation of a scientific law.  A law in this sense implies that its falsification would entail a major disruption of our most fundamental understanding of the meaning of science, and is therefore scientifically impossible.

Does Pi Exist?

To almost everyone this would seem a ridiculous question. Even to most mathematicians the existence of Pi (defined here as the ratio of a circle’s circumference to its diameter) is taken for granted. But the question of the existence of the class of numbers to which Pi belongs is a key example of a fundamental question in the philosophical understanding of Mathematics, and indeed of a broader issue in metaphysics and epistemology.

To understand the meaning of Pi, let’s start with a quick review of the types of numbers we encounter in elementary mathematics. We start with the “natural numbers” – the numbers 1,2,3 and so on. The very fact that these are the numbers we first learn and first teach to our children in their infancy is ultimately the key to understanding the issue in front of us. In the natural numbers, there are no negative numbers, and no zero.  The extension of the number system to include negative numbers I’ve discussed previously.  The concept of zero I’ll defer yet again, though I’ll hint that I believe it represents balance. 

Both natural numbers and integers are “countable” – meaning that they can be listed in a definite order, one after another.  Considered as sets, they both contain a “countable infinity” of elements.

The next extension of the number system is to include all fractions – the division of natural numbers by natural numbers, and while we’re at it we throw in division of signed integers by signed integers other than zero.  This new number system we designate as the “rational numbers”.  The natural number or the signed integer “n” is represented in the rational number system as the ratios n/1, 2n/2, 3n/3, etc.  (Note that I say the signed integers are “represented” – I maintain that the signed integer n is different from the rational number n/1, 2n/2, ….)  The nature in which rational numbers exist is again a topic for another discussion, though I’ll note that their nature is related to the epistemological process of measurement, and to the mathematical concept of continuity.  I’ll also note that the rational numbers can be so ordered as to be countable, and so are “countably infinite” in number.

The next step is a bit tougher to follow.  The “algebraic numbers” are defined as all numbers which are solutions of polynomial equations whose coefficients are integers (and whose exponents are natural numbers).  In a simpler form, it is this number system that sparked the first significant philosophical debate about the meaning of numbers – back in Greece in the time of Pythagoras.  What we now refer to as the “Pythagorean Theorem” states that the square of the length of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides, or a^2 + b^2 = c^2, where c is the hypotenuse.  In its simplest form, if we are computing the length of the diagonal of a square with unit-length sides (a = b = 1), then c*c = 2.  That the value of c, the square root of 2, cannot be expressed as a fraction of natural numbers was a major discovery of Greek mathematics.  (This fact was proven by members of the Pythagorean school, possibly by one Hippasus, who may have been executed for revealing the proof because it had been held as a state secret.)  A good deal of debate occurred through the centuries over the existence of these “irrational numbers”.  I have not spent enough time considering the nature of these numbers to offer an opinion as to what they represent, but I will venture that this number system will be found to have an acceptable basis in reality.
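
For the curious, the standard modern argument is short enough to sketch here (this is the textbook parity proof; whether it is the argument the Pythagoreans themselves used, I cannot say):

    % Assume \sqrt{2} = p/q, with p and q natural numbers and the fraction in lowest terms.
    \sqrt{2} = \frac{p}{q} \quad\Longrightarrow\quad p^2 = 2q^2
    % The right-hand side is even, so p^2 is even, and hence p is even: write p = 2k.
    (2k)^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2k^2
    % So q is even as well, contradicting the assumption that p/q was in lowest terms.
    % No fraction of natural numbers can therefore equal the square root of 2.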

As with the rational numbers, the algebraic numbers can be ordered in such a manner as to be countable.  There is therefore a countable infinity of algebraic numbers.

Both the rational and the algebraic numbers can be shown to be “dense” on the real number line.  Think of the real number line as an “infinitely long” straight line, as you did in high school.  If we take any interval of this line, there will be contained in that interval an infinite number of rational numbers, no matter how small an interval is chosen.  This is the density property of a set of numbers.  This observation, in combination with the fact that all of these number classes are “countably infinite”, leads to some confusing conclusions.  Because we can order and count the elements of these number classes, we can establish a “one to one” relationship between the rational numbers and the natural numbers.  “In a sense”, this implies there are “as many” rational numbers as natural numbers – despite the fact that between any two natural numbers there are an infinite number of rational numbers!  The resolution of this confusion lies in the use made here of the concept “infinite” – which ultimately is not a valid concept.  But that is another digression.
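
To make that “one to one” pairing concrete, here is a small sketch in Python (my own illustration, using the well-known Calkin–Wilf ordering, not anything from a particular text) that walks through the positive rationals in a sequence reaching each of them exactly once, pairing each with a natural number:

    from fractions import Fraction

    def positive_rationals():
        """Yield every positive rational exactly once (Calkin-Wilf order)."""
        q = Fraction(1, 1)
        while True:
            yield q
            # successor rule: the next rational is 1 / (2*floor(q) + 1 - q)
            q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)

    gen = positive_rationals()
    for n in range(1, 16):
        print(n, next(gen))   # 1 -> 1, 2 -> 1/2, 3 -> 2, 4 -> 1/3, 5 -> 3/2, ...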

The last development in this trail of number classes is the discovery of the transcendental numbers.  The historical development of this number class derived from the problem of “squaring the circle” – the classical problem of constructing, with compass and straightedge, a square whose area equals that of a given circle.  It was shown in 1882 that this is impossible, because Pi is not the solution of any polynomial equation with integer coefficients.  Pi then belongs to yet another class of number.  Rather than deal with the area problem, let us define Pi in the simplest possible manner: Pi is the ratio of the circumference of a circle to its diameter.  Hence, if a circle has a diameter equal to a natural number of units (or a rational, or even an algebraic, number of units), its circumference will be a transcendental number.
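
As an aside, the ratio itself can be estimated without measuring any physical circle. Here is a quick sketch of the old Archimedean approach (my own illustration, in Python): inscribe a regular polygon in a circle, repeatedly double its number of sides, and watch the perimeter-to-diameter ratio settle toward Pi:

    import math

    # Circle of radius 1 (diameter 2); the inscribed regular hexagon has side length 1.
    sides, side_len = 6, 1.0
    for _ in range(10):
        print(f"{sides:5d}-gon: perimeter / diameter = {sides * side_len / 2:.10f}")
        # side length after doubling the number of sides (unit radius)
        side_len = math.sqrt(2.0 - math.sqrt(4.0 - side_len ** 2))
        sides *= 2
    print(f"math.pi                        = {math.pi:.10f}")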

Using this definition, we have for the first time a number tied to a perfect geometrical figure.  A circle is an ideal figure that can never be found in the concrete world of objects.  Any “circle” you encounter in the physical world will be imperfect at some level.  “Circle” is an idealization of these experienced real-world circles.  It can be thought of as the limiting perfection of all physical circles.  And so, the question of whether Pi “exists” becomes the question of whether this idealization can be said to exist.  More generally, it is the question of whether a conceptual idealized abstraction can be said to exist.  Such an abstraction can never be exhibited in the physical world.

The answer to this question lies in the fundamental philosophical principles that are applied.  The empiricist will require that to “exist”, a thing must be present in the physical world, and furthermore, must be experienced.  Such an approach will then conclude that ideal circles do not exist, and therefore Pi does not exist.  The rationalist/Platonist will be happy to place the idealizations in their own realm (ultimately a realm occupied by a God), and declare that they in fact exist in this realm, but can only be approximated in the denigrated physical realm.

Finally, the Objectivist will answer with clarity.  The concept circle – as with all other human concepts – exists as a relationship between the human mind and the physical world.  To exist, the concept must have physical representations from which the details specific to each instance (such as imperfections) have been abstracted away (omitted as measurements) to arrive at the concept in human consciousness.  The concept circle then exists as an epistemological linkage between human understanding of the physical world and the real world itself.  And since the circle exists, the number Pi (and ultimately all transcendental numbers) exists in the same sense.

The acceptance of transcendental numbers leads to additional confusions involving the invalid concept of infinity.  Unlike the other number classes, it can be proven that there is an uncountable infinity of transcendental numbers.  A one to one relationship between the transcendental numbers and the natural numbers cannot be established.  Indeed between any two rational numbers (or even algebraic numbers) there is an uncountable infinity of transcendental numbers.  (Intriguingly, a one-to-one relationship can be established between transcendental numbers and the set of all subsets of natural numbers, which in turn implies that the set of all subsets of natural numbers is uncountably infinite as well). 

The further discussion of how these “infinities” and the concept of continuity and “infinite divisibility” should be approached to allow a resolution of the apparent contradictions will need to wait for another time.  Interested parties should consult writings and lectures given by Pat Corvini for excellent discussions of these topics.

“Bloody Bad Science”

I’ve just completed “reading” (listening to) The Black Cloud, by astronomer Fred Hoyle. It has been refreshing to discover a book – science fiction at that – which I can enthusiastically recommend to almost any reader. Hoyle was a successful astronomer who turned to writing science fiction later in life. Hoyle’s science fiction emphasizes the science more than most authors of the genre do, and in particular more than almost all modern authors. The Black Cloud describes the proper process by which scientific discovery and inquiry advance. In particular, for those interested and knowledgeable in astronomy, it is a wonderfully accurate depiction of the science as it was practiced in the 1950s. My fascination with the book also derives, in part, from being old enough to remember at least some of the technology that Hoyle describes, though I was experiencing it much later, in the 1980s, not the 50s.

The story in the book (which I won’t spoil here) does get sufficiently unusual to easily classify as science fiction, but continues to bring in fascinating philosophical sidelights. Unlike most science fiction, I’d even go so far as to say there was no obvious lack of plausibility in the story line – though I’m sure if I thought about it harder there would be plenty of mud to throw; it is fiction, after all. For the faint of heart, I’ll warn you that many people die, though the description of the calamity is not particularly detailed – I might even classify it as “callous”. But for Hoyle, the story is only a vehicle to make some very deep points about the philosophy of science, the nature of information, the nature of life, and even to voice some frustrations about government.

What I want to discuss here are the thoughts he provides on the nature of science – I may very well come back to other thoughts he expresses in this wonderful book in later discussions. The point he makes (only stated openly twice in the book, but demonstrated continuously) is that observations of correlations or coincidences are not a proper basis for science, and do not represent causality. This is certainly not a new sentiment presented by Hoyle, but it is one that is essential to grasp, and which is of increasing relevance in what passes today as “science”.

The purpose of science is to determine the causes of phenomena. By a cause we mean an antecedent particular entity or event whose existence results in the existence of the phenomenon. The determination of cause is a structured activity, involving observation, the formulation of a hypothesis, prediction, and verification.

The correct progression in the development of a scientific theory starts with observation of the phenomenon under study. The researcher may then find correlation between the phenomenon and other entities or events. This must lead to the formation of a hypothesis which explains the connection between these antecedents and the phenomenon, and this explanation must be formulated within the context of existing knowledge. That is, the hypothesis cannot be arbitrary, whimsical, or rely upon unknown forces or entities. The hypothesis must arise from inductive reasoning from the observations made of the phenomenon and its antecedents. Any valid hypothesis must further be capable of making predictions which can be subsequently verified.

Hoyle’s commentary deals specifically with the critical step at which a hypothesis becomes a theory – when it is accepted as an explanation, and the cause of a phenomenon is established. Hoyle points out that the mere observation of correlation is never sufficient to establish a connection; furthermore, attempting to “prove” the hypothesis by deductive reasoning, using the correlation and the hypothesis itself as a starting point, leads to completely invalid chains of logic and cannot bring additional validity to the hypothesis. The only way to validate a hypothesis is to use it, in conjunction with other observations if necessary, to make predictions of future observations, and then to make those observations and confirm the predictions.

Let’s have a look now at the actual text of The Black Cloud. It is additionally interesting how Hoyle manages to present his ideas in very short passages. He shortens them further by using the Russian character “Alexandrov”, who speaks very tersely. After a few comments from Alexandrov that only later become interesting (and to which I’ll return), the first example of Hoyle’s attack on poorly constructed science arises after the astronomers observe a strange pattern in how the Cloud acts to block radio transmissions. After reviewing the actual observations, the scientists begin hypothesizing a feedback mechanism, but without explaining how it would work. Here is the relevant passage:

“Let’s go into this in a bit more detail,” … “It seems to me that this hypothetical ionising agency must have pretty good judgment. Suppose we switch on a ten centimetre transmission. Then according to your idea, Chris, the agency, whatever it is, drives the ionisation up until the ten centimetre waves remain trapped inside the Earth’s atmosphere. And — here’s my point — the ionisation goes no higher than that. It’s all got to be very nicely adjusted. The agency has to know just how far to go and no further.”

“Which doesn’t make it seem very plausible,” said Weichart.

“And there are other difficulties. Why were we able to go on so long with the twenty-five centimetre communication? That lasted for quite a number of days, not for only half an hour. And why doesn’t the same thing happen — your pattern A as you call it — when we use a one centimetre wave-length?”

“Bloody bad philosophy,” grunted Alexandrov. “Waste of breath. Hypothesis judged by prediction. Only sound method.”

The next outburst by Alexandrov follows a comment about ESP experiments:

“I know this is rather a red herring, but I thought these extra-sensory people had established some rather remarkable correlations,” Parkinson persisted.

“Bloody bad science,” growled Alexandrov. “Correlations obtained after experiments done is bloody bad. Only prediction in science.”

…“What Alexis means is that only predictions really count in science … It’s no good doing a lot of experiments first and then discovering a lot of correlations afterwards, not unless the correlations can be used for making new predictions. Otherwise it’s like betting on a race after it’s been run.”

The final passage uses an example that is easily remembered. It both reinforces the theme discussed so far and suggests another variant of it. The situation leading to this passage requires some explanation. The governments of Earth had launched nuclear missiles into the Cloud. These had been redirected by the Cloud to return to their points of origin, with “random perturbations”. Three major cities had been destroyed.

“It looks to me as if those perturbations of the rockets must have been deliberately engineered,” began Weichart.

“Why do you say that, Dave?” asked Marlowe.

“Well, the probability of three cities being hit by a hundred odd rockets moving at random is obviously very small. Therefore I conclude that the rockets were not perturbed at random. I think they must have been deliberately guided to give direct hits.”

“There’s something of an objection to that,” argued McNeil. “If the rockets were deliberately guided, how is it that only three of ’em found their targets?”

“Maybe only three were guided, or maybe the guiding wasn’t all that good. I wouldn’t know.”

There was a derisive laugh from Alexandrov.

“Bloody argument,” he asserted.

“What d’you mean ‘bloody argument’?”

“Invent bloody argument, like this. Golfer hits ball. Ball lands on tuft of grass — so. Probability ball landed on tuft very small, very very small. Million other tufts for ball to land on. Probability very small, very very very small. So golfer did not hit ball, ball deliberately guided on tuft. Is bloody argument. Yes? Like Weichart’s argument.”

“What Alexis means I think,” explained Kingsley, “is that we are not justified in supposing that there were any particular targets. The fallacy in the argument about the golfer lies in choosing a particular tuft of grass as a target, when obviously the golfer didn’t think of it in those terms before he made his shot.”

The Russian nodded.

“Must say what dam’ target is before shoot, not after shoot. Put shirt on before, not after event.”

“Because only prediction is important in science?”

“Dam’ right. Weichart predicted rockets guided. All right, ask Cloud. Only way decide. Cannot be decided by argument.”
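
Alexandrov’s golfer can be made concrete with a trivial simulation (my own sketch, not anything from the book). Name the target before the shot, and hits are as rare as the odds say they should be; name the target after the shot, and every shot looks like a miraculous hit:

    import random

    random.seed(1)
    TUFTS = 1_000_000      # possible landing spots on the fairway
    SHOTS = 100_000

    predicted_hits = 0     # target (tuft 0) named before each shot
    post_hoc_hits = 0      # "target" named after the shot: wherever the ball landed
    for _ in range(SHOTS):
        landing = random.randrange(TUFTS)
        predicted_hits += (landing == 0)   # did it hit the tuft we named in advance?
        post_hoc_hits += 1                 # it always lands on *some* tuft we could name afterward

    print("target named before the shot:", predicted_hits / SHOTS)   # roughly 1/TUFTS
    print("target named after the shot :", post_hoc_hits / SHOTS)    # always 1.0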

I will return to discuss the new aspect which I believe Hoyle has introduced in this particular passage in my next post.

On Probability

Over this winter vacation, I’ve been intellectually focused on some background mathematics supporting my work in detection. One of my projects is reading through a rather introductory text on probability by Athanasios Papoulis (1965). I’ve found the opening sections of this book very philosophically relevant, in addition to its being a promising textbook – if I can ignore the philosophy being presented.

Papoulis opens the text with a very blunt statement, which instantly alerts me to the conceptual framework in which this mind is operating: “Scientific theories deal with concepts, never with reality”. What follows in his introduction is a justification for approaching the subject purely from a deductive process starting with a set of axioms, and not worrying – much – about the relationship between the theory and “the real world, whatever that means”. Among the blunt statements in the introduction is this:

To conclude, we repeat that the probability P(A) of an event A must be interpreted as a number assigned to this event, as mass is assigned to a body. In the development of the theory, one should not worry about the “physical meaning” of P(A). This is what is done in all theories.

After this very open confession of intellectual sterility, the author gets down to work by asking how probability – which he has just declared to be arbitrarily defined numbers – should be defined. He offers three strawmen definitions, and then settles on a fourth. The strawmen are of some interest as well:

(1) Relative Frequency Definition. This is probably the definition most commonly assumed – that probability is the number of trials in which the event of interest occurred, divided by the total number of trials. This needs to be more carefully constructed as the limit of this ratio as the number of trials “goes to infinity”, or, as I’ll even more carefully say it, as the number of trials becomes arbitrarily large. Papoulis rejects this approach as too cumbersome, because no matter how many actual trials are made, we never approach “infinity”, and therefore can never say that we have a sufficient number of trials to establish meaning for this ratio.
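
To illustrate the relative-frequency idea (my own sketch, not Papoulis’s), here is a short Python simulation estimating the probability of rolling a six by the fraction of rolls that come up six; the ratio wanders toward the a-priori value of 1/6 as the number of trials grows, but at no finite point is it exactly that value:

    import random

    random.seed(0)
    sixes = 0
    for n in range(1, 100_001):
        sixes += (random.randint(1, 6) == 6)   # count the rolls that came up six
        if n in (10, 100, 1_000, 10_000, 100_000):
            print(f"after {n:>6} rolls: relative frequency = {sixes / n:.4f}")
    print(f"a-priori value 1/6              = {1/6:.4f}")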

(2) Classical Definition. Here the probability of an event is determined by a-priori reasoning about the situation at hand. For example (Papoulis’ example), a six-sided (fair) die has six equally possible outcomes, so the probability of getting a one on a die roll is 1/6. No experiment need be made to reach this (rational) conclusion. This strawman is knocked down by stating that it really only works for simple cases, and works most easily only when the outcomes are of equal likelihood. A couple of examples are given to show that determining the proper probability in this manner is very error-prone. For example, one could attempt to determine the likelihood of rolling a total of 2 with two dice by saying the number of possible outcomes is 11 (2, 3, …, 12), and we are interested in the value 2, so the probability is 1/11 – which is of course wrong (counting the 36 equally likely ordered pairs of faces gives the correct value, 1/36).
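
The two-dice correction is easy to check by brute force (again my own sketch): enumerate the 36 equally likely ordered outcomes and count those whose sum is 2:

    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))    # all 36 ordered pairs of faces
    favorable = [o for o in outcomes if sum(o) == 2]   # only (1, 1)
    print(len(favorable), "/", len(outcomes))          # 1 / 36
    print("naive answer from the 11 possible sums:", 1 / 11)
    print("correct classical answer:", len(favorable) / len(outcomes))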

(3) Measure of Belief. This one seems to be thrown in as an easily dismissed psychologically based argument.

Papoulis chooses to define probability from three axioms, and claims to then proceed to develop the entire theory of probability from these axioms (plus one more minor extension). The axioms seem ridiculously primitive:

The probability of an event A is a number P(A) assigned to this event. This number obeys the following three postulates, but is otherwise unspecified:
I. P(A) is positive or zero
II. The probability of the certain event is 1 [the certain event always occurs]
III. If A and B are mutually exclusive [they both cannot happen in the same trial] then P(A+B)=P(A)+P(B). [The probability that A or B happens is equal to the sum of the probability that A happens and the probability that B happens].

And that’s it! That’s his “definition” for probability, which clearly lacks any meaningful tie to the “real world, whatever that means”. So he can proceed in philosophical comfort.

Or can he?

Interestingly, in order to start his progression from these axioms, he needs to introduce a large segment of set theory – otherwise he cannot define what an “event” is, which is contained in this definition of probability. Without belaboring set theory here, an “outcome” is one possible result of a trial of the process we are trying to test. The set of all possible outcomes is the “certain event” mentioned in the definition – one element of the certain event will always be the outcome of any trial, so the probability of the certain event is 1. An “event” is then any subset of the set of all possible outcomes.
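
In the discrete case this machinery is simple enough. A tiny sketch (my own, for a single roll of a fair die) shows the certain event, events as subsets of it, and one assignment of numbers that satisfies the three axioms:

    from fractions import Fraction

    certain_event = {1, 2, 3, 4, 5, 6}            # the set of all possible outcomes
    even = {2, 4, 6}                              # an "event" is any subset of the outcomes
    P = lambda ev: Fraction(len(ev), len(certain_event))   # one assignment satisfying the axioms

    print(P(certain_event))                        # 1    (axiom II: the certain event)
    print(P(even), P({1, 3}))                      # 1/2 1/3
    print(P(even | {1, 3}) == P(even) + P({1, 3})) # True (axiom III for mutually exclusive events)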

Next, we encounter a very bizarre twist in this “axiomatic” probability theory. Not all events, says Papoulis, can have a probability assigned to them. That is, not all sets of possible outcomes can be given a number that will meet the axiomatic conditions from which the theory of probability will be developed. Papoulis does not confront this problem directly, treating it as merely an annoyance and referring the reader to measure theory for a better explanation, but he does give a major example to indicate where the problem lies. Suppose the outcome of a process that we are interested in could be any real number (real numbers are all of the common numbers from “negative infinity” to “positive infinity”, including all rational and irrational numbers). Then the “certain event” is the set of all real numbers. But consider the event which is the set consisting of a single number, say the set {3}. If every set of this type is given a probability, there will be an “infinite” number of these probabilities, and in order for axiom III to remain true, each of these probabilities would need to be zero. Then for any specific outcome A of a trial of the process, the event {A} would have probability zero – which is a clear contradiction.

The escape from this contradiction is itself quite bizarre, and only partially explained (we are referred to measure theory for a more complete explanation). For this example, only events that can be formed from the union or intersection of a countable number of continuous intervals or isolated points will be given a probability. There are sets that cannot be so formed (we are told they are complicated to construct, and I recall similar constructions from my days of formal math training, and I agree with him), and these will not be given probabilities.

If this escape seems to make little sense – and Papoulis seems to understand that the reader will not be able to make sense of this – he offers a better escape: “…one can construct certain pathological sets that are not countable intersections or unions of intervals. Sets of this kind have no probabilities, but are of no importance in applications, and we can forget them”.

Now this is a simply amazing demonstration of a rationalist getting boxed into a corner, and then escaping inelegantly by referring us back to the real world (whatever that means) – the very thing he has already declared cannot be consulted in developing a proper mathematical “theory”.

Harriman and Galileo

This fall my reading has centered on David Harriman’s recently published “The Logical Leap: Induction in Physics”. I started off listening to the book on Audible, then realized that there was too much detail I was missing, so I went ahead and purchased a hard copy. After a great deal of reflection on the book, I’ve reached the conclusion that this is a very significant philosophical advance, largely derivative of Ayn Rand’s epistemology. The main thesis of the work consists of several observations regarding how the proper process of induction is performed, and how following this process leads to the attainment of truth. The presentation of these ideas is accomplished through relatively brief explanatory material in the opening and closing chapters of the book, and through a lengthy series of examples from the history of science in the central mass of the work.

There has been much commentary on the accuracy of the presented history, and on whether the errors in this history invalidate the theory of induction Harriman presents. This led me to review original (translated) material from Galileo in particular, to judge the accuracy for myself. Indeed, I found clear errors in Harriman’s account of some of Galileo’s work on falling bodies, matching the factual criticisms by John P. McCaskey. I am also generally skeptical of anyone who attempts to describe the thought processes that led a researcher to perform various experiments and reach conclusions. In the case of Galileo, my skepticism is heightened by the stylized nature of Galileo’s writings (in artificial dialogues), where the actual thought process is not likely to be the thought process presented in the work. However, Galileo was very explicit about his theory of knowledge and his support for proper scientific method in other writings. Though Harriman may take some license with his portrayal of the details of Galileo’s thought process, he does portray Galileo’s underlying theory of knowledge perfectly.

The errors Harriman makes in describing Galileo’s falling-body experiments center on the claim that Galileo had not conducted experiments in an aqueous medium, and that if he had, he would not have been led to the induction of universal gravitational acceleration. However, Galileo describes such experiments in detail, and in fact used the results of these experiments to conclude inductively that the acceleration of falling bodies in a vacuum is independent of the falling body’s mass. Although there is a curious confusion here in Harriman’s account (given the breadth and depth of Harriman’s research into the history of science), the error is not essential to Harriman’s evidence for the nature of the inductive process. I conclude therefore that these inaccuracies do not affect the validity of Harriman’s theory of induction.

I still have not read through the original works of the other scientists (and proto-scientists) that Harriman uses as examples, though I do have selections from most of them (Ptolemy, Newton, Lavoisier) and I will eventually read through this material. I am willing in the meantime to accept Harriman’s accounts as representing at least the “essence” of their thinking.

Regarding Galileo himself, I have greatly enjoyed reading both of his Dialogues – On the Two New Sciences and On the Two World Systems – as well as Letters on Sunspots, The Assayer, and some of his other letters dealing with the relationship between science and the Church. Throughout these, we see Galileo declaring the proper source of scientific truth – induction from observation – and disdaining the peripatetic argument from the “authority” of Aristotle. In several places, Galileo states that Aristotle himself would change his conclusions if he were presented with the observational evidence available to Galileo. Since I am also a great proponent of Aristotelian logic (though not his “science”), I found these statements gratifying.

Galileo’s struggle with the Church is fascinating. Far from being an atheist, he defends his “freethinking” by relying upon other ecclesiastic authorities to make his argument, in particular Augustine. Augustine I had written off as the worst of the Christian “philosophers” (and such he remains, from an ethics viewpoint); however, he offers at least a partial defense of science by separating matters of faith from matters of fact. In matters of fact, says Augustine, the Bible should not be interpreted literally, and as we discover new explanations for phenomena through the use of logic and observation that are at apparent odds with scripture, it is our interpretation of the scripture that should be questioned and changed, not the use of logic that should be abandoned. Another form of this argument suggests that since our rational faculty is given to us by a perfect God, both its use and scripture must lead to Truth. When those truths conflict, it is our faulty understanding of scripture (which is the word of the unfathomable) which is in error. Of course both of these approaches to balancing faith and reason are ultimately fatally flawed, as no definition of the boundaries of the arbitrary field of “faith” can be described.

I would enjoy reading the entire body of Galileo’s work – of which an immense quantity apparently exists, filling many volumes. But, amazingly and disturbingly, the vast majority has never been translated into English. There is an apparently comprehensive translation into French from the Italian dialects in which he wrote, but no one has found a reason to translate most of the work into English. Given the astounding range of Galileo’s contributions (astronomy, mathematics, mechanics, dynamics, fluid dynamics, meteorology, not to mention the philosophy of science), this is a dismaying fact.

Recent Reading, and Reawakening

It’s been far too long since this space has been active, though you’ve heard that sad tale before. The best I can do for tonight is list some recent reading and provide shallow commentary. Much of my free time has gone into teaching Astronomy, and the rest into developing the hobby of astrophotography. I’ve still maintained a reading schedule, but without much reflection on content.

After completing Durant’s Age of Faith this spring, I read some minor fiction and then moved into a reading of an old textbook, The Use of Force, which is a large collection of essays regarding foreign policy, culminating in the Cold War tactics of the 1980s. This reading was mostly of historical interest, bringing out tidbits such as the dreadnought arms race prior to WW1, and some fascinating history of the development and very slow adoption of technology in armed forces. It also pointed out my lack of knowledge of the Cold War era.

I also read a history of Spain, though that was largely forgettable – the book itself was too superficial, and written too long ago to cover the fall of Franco.

More recently, I read most of “Conceptual Foundations of Scientific Thought” by Marx Wartofsky. This work was of some interest, though the author focused on pragmatic interpretations of the philosophy of science, with an emphasis on linguistics. Eventually I stopped reading at the final section dealing with “modern physics” as it was sure to become intolerably painful.

I’ve also read through a few science fiction works, including Stranger in a Strange Land, which was entertaining up until the final parts of the story, and The Mote in God’s Eye pair of books by Niven and Pournelle, which were a bit more entertaining.

One obscure book I just read this weekend was “Glide Path” by Arthur C. Clarke. Not quite an SF story, this is a fictionalized account of the development of ground tracking radar in WW2. Mildly interesting, but really only for real nerds – not much of a plot, nor much character development.

I am planning to return to the Objectivist canon next, though I’m currently entertaining myself with Dashiell Hammett.

Brief Reminder on Comments

I do accept comments on this blog; however, the frequency of spam comments is simply astounding. Despite the use of a spam blocker, I am still getting a couple obviously spam comments each month. The spammers are improving their techniques, using algorithms that produce comments that can look legitimate. I’ve just cleaned out the queue once again, and it’s possible that I’ve thrown out two or three actual valid comments. If you wish to place a comment in the future, either contact me through email (those of you who know me) or refer to something specific in the post that you are commenting upon. Otherwise you’ll most likely end up in the trash.

Sowell’s Basic Economics – Part 1

We recently re-joined Audible.com, and the first “new” book I purchased was Basic Economics, by Thomas Sowell.  I had read many of Sowell’s articles posted on Capitalism Magazine (www.capmag.com) and found them to be very clearly written and always in agreement with free-market principles.  Sowell has published a couple dozen books, mostly on economics – which actually was a concern for me in choosing to read (alright, listen to) his work.  I often worry that a prolific author may either be poorly edited or repetitive.  I had also worried that a book entitled “Basic Economics” might have little to add to my reasonable knowledge of the subject.

The book is ponderous – in print it is 640 pages; as an audio book it ran over 18 hours.  I had mistakenly thought it was shorter because of how Audible had structured the downloads, but I was pleasantly surprised when the book did not end after the first 12 hours – and for a book on economics, being listened to in a car, that’s saying a lot!  Sowell accomplishes a quite thorough review of the major elements of economics at an introductory level, while making the material accessible and just barely entertaining.  In every instance where I was beginning to grow impatient with the length of the discussion on a topic, he either brought the topic to an end, or threw in some intriguing real-world case study.  I have only a couple of minor complaints about the structure of the book.  There are the odd “Overview” chapters, occurring at the end of each major section, which appear to contain more than mere summaries, might be misleading, and seem awfully long.  There are a few instances of straight-out repetition of the text, which seem to be accidental – the kind of thing any editor who read the entire book would find and correct.

Sowell’s overall theme is that the principles of economics are really quite simple, but become confusing in the popular mind when mixed with emotion, psychology, and politics.  He clearly defines economics as “the efficient allocation of scarce resources which have alternative uses” – and if you haven’t memorized this after he repeats it at least 50 times throughout the text, then you haven’t read the book.  He does an excellent job of boiling each element of economics down to fundamental principles – supply and demand as the determinant of the value to be exchanged for an item, the difference between value and price as determined by the money supply, the nature of profit and loss and their effects on business, the fact that labor is just another commodity to be traded.  His coverage of banking and the financial system is a bit light, but accurate, and probably as deep as he can go without causing confusion in his target reader.

The most interesting sections, for my own advancement in understanding, were his treatment of risk and insurance and his discussion of international trade.  He clearly describes the difference between an insurance policy – run by a profitable business – and the so-called government insurance programs, which he rightly identifies as merely a form of forced redistribution of wealth from the younger to the older generations.  In an insurance company, the study of risk is paramount, and premiums can be computed scientifically, based on the statistics of claims of various sorts for the various classes of clients.  In the government programs, where the insurance is an “entitlement”, risk is irrelevant, premiums are independent of class (other than being assigned as a percentage of income), and the funds collected as “premiums” are intentionally mingled with general tax collections and spent as the current government sees fit.
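
To make the “computed scientifically” point concrete, here is a toy premium calculation (entirely my own sketch, with made-up numbers – nothing from Sowell): the premium for a class of clients is driven by the expected claim cost for that class, plus a loading for expenses and profit:

    # Hypothetical numbers for one class of policyholders (illustration only).
    claim_probability = 0.02        # chance a policyholder in this class files a claim this year
    average_claim = 25_000.0        # average payout per claim, in dollars
    expense_and_profit_load = 0.25  # fraction of the premium kept for costs and profit

    expected_loss = claim_probability * average_claim        # $500 per policyholder per year
    premium = expected_loss / (1 - expense_and_profit_load)  # about $667
    print(f"expected loss per policy:   ${expected_loss:,.2f}")
    print(f"actuarially priced premium: ${premium:,.2f}")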

In the international trade section, Sowell provides an outstanding description of how the fallacy of the “zero-sum game” – wherein any wealth transferred between countries is seen as a loss for the debtor and a gain for the creditor – can be easily refuted, by noting that wealth is constantly being created through investment.  The conclusion is that, with very rare exceptions, any trade occurring between countries, regardless of the balance of exports and imports (in goods or funds), is greatly beneficial to both countries involved.

As strong as his general themes are his selections of examples.  In explaining the economics of big business, he provides a lengthy description of the history, and the changes in market position, of such companies as Sears and Roebuck, Montgomery Ward, JC Penney, McDonald’s, White Castle, A&P, and Walmart.  These are fascinating histories in and of themselves, and a separate book just discussing these and similar histories would make extraordinarily interesting reading.
