
Wednesday, July 27, 2016

Mechanism Romanticism and the Origins of the Computer

Guest Article: Reposted with thanks


The story of the origins of the electronic computer, as it is most frequently told, is an engaging tale of intellectual turbulence in the early decades of the twentieth century. The computer grew out of dramatic upheaval in the fields of mathematics and logic, not unlike what was happening at the same time in physics, politics, and the arts. In this paper, we shall examine the origins of the computer from the perspectives of two competing world views, which we will call “Mechanism” and “Romanticism”, after Dahlbom and Mathiassen (1993). Although the computer is considered the crowning achievement of the former of these, we shall see that, ironically, it was inspired by a discovery that represented, in a sense, a major setback for the Mechanistic mode of thinking.


The story of the computer usually begins with the great German mathematician David Hilbert, who at the turn of the century challenged his field with twenty-three unsolved problems, which he believed could be solved, or proved unsolvable, if they could be expressed unambiguously in the language of mathematics. Hilbert’s belief that all mathematical truths should be derivable from fundamental axioms was shared by the British mathematician, philosopher, and logician Bertrand Russell and his partner Alfred North Whitehead. Acting upon that conviction, these two men created their great opus, Principia Mathematica, between 1910 and 1913, which, in its attempt to place mathematics squarely in the domain of logic, represented the first new system of logic since Aristotle (Pagels, 1989; Dyson, 1997).

In 1928, Hilbert posed three questions designed to answer explicitly whether it was possible to define a closed mathematical system. He asked whether a finite set of rules for such a system could be defined so as (Dyson, 1997):
  1. To be proved consistent
  2. To be proved complete
  3. To allow for a decision procedure, for any statement in the language, that could either prove or refute the statement, but not both. 
Then, in 1931, the ambitions of Hilbert, Russell, Whitehead, and their followers, for a mathematics perfectly rooted in logical formalism, were abruptly dashed by the Austrian logician Kurt Gödel’s stunning proof that no system could meet both of Hilbert’s first two criteria — that is, any formal logic system (containing at least the axioms of arithmetic) must either be inconsistent or incomplete (Pagels, 1989). Five years later, with the mathematics community still reeling from Gödel’s influential theorem, a twenty-four-year-old Englishman at Cambridge, Alan Turing, came up with the innovative notion of a “Universal Computing Machine”, as part of a complicated effort to establish whether or not it was possible, in an axiomatic system, to distinguish arbitrary propositions from provable ones (Strathern, 1997). Turing’s hypothetical machine, now known as the “Turing Machine”, is today generally recognized as the conceptual forerunner of the modern digital computer.

Before we begin our examination, let us be clear about what we mean by Mechanism and Romanticism. Mechanism, as it is described by Dahlbom and Mathiassen, is a broad term for formal and rational modes of thinking that have ebbed and flowed throughout modern history, burgeoning with early modern thinkers such as Descartes, Leibniz, and Newton, flowering during the French Enlightenment, and experiencing a revival with the positivism of Comte and the highly logical philosophies of Russell and Whitehead. In the arts, it was perhaps best exemplified by the classically inspired and highly refined paintings of the technical prodigy Jacques-Louis David (Canaday, 1981). Mechanistic thought emphasizes objective detachment, order, control, and stability, sometimes at the expense of emotion and spontaneity. It tends to discredit the purely intuitive, favoring what is formally provable and demonstrable. The terms “classicism” (Canaday, 1981) and “neo-classicism” (Wilson, 1998) are often used in much the same sense as Dahlbom and Mathiassen’s “Mechanism”.

Beginning as a reaction to Mechanism, and existing in perpetual tension with it since about the time of Napoleon I, is Romanticism, a world view that places its highest values on human passion, intuitive brilliance, and the beauty of Nature. The Romantic view holds that it is better to experience than to analyze, that cold calculation deadens the human spirit, and that intuition, not computation, is the essence of thinking (Dahlbom and Mathiassen, 1993). Famous Romantics include Napoleon, Rousseau, Delacroix, Goethe, Thoreau, Liszt, Wagner, and Nietzsche. After suffering a minor decline at the beginning of the 20th century, Romanticism was revived later on in the form of the deconstructionism of Jacques Derrida and Paul de Man (Wilson, 1998).

The issue of the machine has been at the heart of the conflict between Mechanism and Romanticism from the beginning. The Mechanists, true to their name, have always been fascinated with and drawn to machines. For example, Leibniz spent much of his life dreaming of automatic calculating machines, and in fact built an extraordinarily complex one that could perform multiplication and subtraction, as well as addition (thus bettering Pascal, who had earlier built one that performed only addition) (Strathern, 1997). Indeed, Newton liked to conceive of the world as a machine, built by God and whirring and humming according to the sacred laws of physics (Wilson, 1998). The Romantics, on the other hand, have tended to be suspicious of machines, seeing them as a threat to humanity. Thoreau’s self-imposed exile at Walden was largely a reaction against the rapid mechanization of the industrializing world in which he lived. Rousseau became famous for advocating a retreat from machinery, and other trappings of civilization, and a return to nature and to our natural state as “noble savages” (Wilson, 1998).

The computer, considered by many the ultimate machine, continues to be a subject of debate between adherents to today’s descendants of Mechanism and Romanticism. A machine which not only can perform human tasks, but which threatens to tread on the hallowed ground of human thought, it elicits the greatest excitement from modern-day Mechanists in the fields of Artificial Intelligence and Artificial Life, and draws the most alarmed criticism from modern Romantics such as Searle and Dreyfus (Rose, 1992).

David Hilbert’s program to place modern mathematics on a foundation of formal logic was, in a sense, a Mechanist’s dream. While there may be no such thing as a completely Romantic mathematician, Hilbert carried the notion of rigor further than even most Mechanists would go. A bold and stern taskmaster for his colleagues, Hilbert not only challenged them in 1900, when he was less than 40 years old, with what he believed to be the most important unsolved problems of their time, but proposed that progress in the field during the next century be judged according to these problems (as it turned out, only about half of the twenty-three had been solved by 1950). Hilbert had been slow to start for a mathematician, not demonstrating his abilities until he was in his twenties, but once he got going he was a formidable force. He made major contributions in the areas of algebraic numbers and geometry, and to the theory of invariants. So exhaustive was his work that he sometimes wiped out entire areas of investigation. By the turn of the century, Hilbert had established himself as the leading mathematician in Germany, and one of the greatest mathematical minds in history. He was an important leader in the movement toward abstraction that has been so prominent in the intellectual achievements of the 20th century (Pagels, 1989).

Long interested in the idea that mathematics is a deductive system in which all statements derive from fundamental axioms, Hilbert turned in earnest in the 1920s to the problem of establishing the consistency and completeness of these axioms. His idea was to start with more complicated areas of the field, to prove them in terms of simpler ones, and to keep reducing until the consistency of all areas was established (in this vein, he proved the consistency of Euclidean geometry by assuming the consistency of number theory). He believed that the world could be assured of the validity of mathematical theorems only when it was confident of the consistency of an underlying system of axioms. It was this conviction that led him to pose his three questions of 1928, when he addressed the International Congress of Mathematicians. Heinz Pagels writes, “With the task of mathematics presented with such clarity and purpose, it seemed as if mathematics had been forever inoculated against infection by the twin diseases of paradox and inconsistency” (Pagels, 1989). Even in this field of pure rationality, expressed in the pristine terms of mathematical symbolism, Hilbert pushed for further discipline and formalism. There would be no sloppy reliance on intuition in mathematics, if he had his way.

Bertrand Russell was a firm adherent to Hilbert’s school of thought, and along with his collaborator Whitehead, independently pursued the goal of a mathematics rooted in formal logic with one of the most intellectually ambitious undertakings of modern times, the three-volume Principia Mathematica. Russell, who is considered by many to have been the greatest logician since Aristotle (Thorne and Collocott, 1984), believed mathematics to be subsumed by the more general field of logic, and thus, symbolic logic to be the proper language in which to describe it. The Principia was a manifestation of this belief, describing the theorems of natural and real numbers and of analytic geometry in terms of the laws of logic.

The Principia was influential and widely admired, but fell short of its goal. Its main failing was with the notion of a hierarchy of “types” of sets, which Russell had introduced in an attempt to avoid certain paradoxes that arise from self-referring sets — sets that exhibit what Hofstadter calls “Strange Loops”, a major theme of his popular-science classic, Gödel, Escher, Bach (1979). The theory of types was at best slightly contrived, and at worst, in the words of Pagels, “unbelievably cumbersome”. Hofstadter demonstrates some absurd implications of the theory by applying it analogously to ordinary English sentences that self-refer to their writer, or to the document in which they appear.

This quintessentially Mechanist undertaking, the Principia, drew strong criticism from more Romantically inclined mathematicians of the “Intuitionist” school, led by L. E. J. Brouwer of Holland. Mathematics, the Intuitionists held, was grounded less in logic than in our mental capacity to imagine and to discern the properties of mathematical entities. This idea had its roots in the philosophy of Immanuel Kant (Pagels, 1989), who helped to pave the way for the Romantic revolution of the late 18th and early 19th centuries. The Intuitionists were not only critical of the Principia, but opposed the entire “Formalist” program of Hilbert and his followers (Pagels, 1989). Even in the supremely rational field of mathematics, it seems, there was tension between Mechanists and Romantics.
When Kurt Gödel dealt a death blow to the Formalist program with his Incompleteness Theorem in 1931, he chose the great Principia as the medium for the illustration of his fatal proof (in fact, the title of his famous paper is, translated from the German, “On Formally Undecidable Propositions in Principia Mathematica and Related Systems I”) (Hofstadter, 1979). Gödel was only 25 years old at the time, and still living in his native Austria (later he would come to the United States, where he would become a good friend of Einstein).

The proof is highly formalized and reputedly difficult to follow. Essentially, Gödel employed a clever trick to express in mathematical terms a statement akin to that of the famous Epimenides, a Cretan who said “All Cretans are liars” (Hofstadter, 1979). Gödel’s trick was to invent a code — so-called Gödel numbering — that would allow mathematical statements to refer to themselves (more “Strange Loops”, in Hofstadter’s phrase), and to become what Gödel called “Metamathematical assertions” (Dyson, 1997). Using this code, Gödel constructed a statement that translated roughly to the sentence, “This statement is not provable”. At the same time, however, the statement was expressed so as to be obviously true (Pagels notes that it is the subtle difference between truth and provability that allowed Gödel himself narrowly to escape the jaws of Epimenides’s paradox). Because Gödel constructed the statement from the axioms of the Principia, either that system was inconsistent, or it was incomplete — i.e., there existed true statements that could only be proved by going outside its boundaries. What was most extraordinary about the proof, however, was that he established it in such profoundly general terms that it applied not only to the Principia, but to all formal systems encompassing elementary arithmetic (note the phrase “and Related Systems” in the title of his paper). Without this degree of generality, it would only have identified a flaw in Russell and Whitehead’s tome (presumably someone would have generalized it eventually, but how much less dramatic that would have been!).
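The coding trick itself is easy to demonstrate. What follows is a minimal sketch in Python of the prime-power idea behind Gödel numbering: each symbol in a formula is assigned a positive integer code, and the sequence of codes is packed into a single number by making the k-th code the exponent of the k-th prime. (The codes used here are illustrative assumptions; Gödel’s actual symbol assignments, and his full arithmetization of proofs, are far more elaborate.)

    def next_prime(p):
        """Return the smallest prime greater than p (trial division)."""
        q = p + 1
        while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
            q += 1
        return q

    def encode(codes):
        """Pack a sequence of positive symbol codes into one integer:
        the k-th code becomes the exponent of the k-th prime."""
        n, p = 1, 2
        for c in codes:
            n *= p ** c
            p = next_prime(p)
        return n

    def decode(n):
        """Recover the symbol codes by factoring out successive primes."""
        codes, p = [], 2
        while n > 1:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            codes.append(e)
            p = next_prime(p)
        return codes

    # Example with made-up codes: [3, 1, 2] -> 2**3 * 3**1 * 5**2 = 600.
    assert decode(encode([3, 1, 2])) == [3, 1, 2]

Because factorization into primes is unique, the encoding is reversible, which is what lets statements about numbers double as statements about (encoded) statements.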

Gödel’s proof caused a rumble that could be felt throughout the world of mathematics and logic. To some, it threatened to end not just Formalism, but mathematics as a whole. In Strathern’s words, “Mathematics was illogical! (And, horror of horrors, so too was logic!)”. What did this imply for those who believed, as had Galileo, that mathematics was the universal language, not only of science, but of Nature? Gödel’s belief that there was no cause for panic, however, eventually prevailed. According to Dyson:
The mathematical territory that Gödel expropriated from the stronghold of consistency and proof was distributed to the surrounding mathematical wilderness in the form of intuition and truth.
In this way, Gödel’s proof represented a victory for the Intuitionists, and by extension, for the more Romantic world view. Its implication was that the mechanical combination and transformation of a small set of logical axioms was not sufficient to establish higher-level theorems in mathematics. Rather, flashes of human brilliance were required, to establish connections “outside the system”, in a way that was not reducible to strictly logical rules. The mathematician had to have something of the poet in him, as well as the logician. In fact, this view of mathematics has survived to the present. Pagels writes:
In a sense the whole concern with logic, axioms, and the foundations of mathematics initiated by Frege [another of the Formalists], Russell, and Gödel can now be seen as an immense detour — perhaps a necessary one — around the actual conduct of mathematics … Mathematics, as practised, is less of a logical discipline and more of an intuitive science.
Soon after this characteristically modern upheaval in mathematics, an archetypal modernist made his entrance on the stage. Alan Turing was the sort of man Joyce might have written about, had he been interested in science. He was a deep and original thinker, a stubborn individualist, and a social misfit. A homosexual who was not discreet enough in a homophobic society, he would die at the young age of 41 by his own hand, after having been given hormone treatments to “cure” him of his condition (Strathern, 1997). In the wake of Gödel’s theorem, Turing, as a young fellow at King’s College, Cambridge, became interested in the famous Entscheidungsproblem, or decidability problem, of whether provable statements within a logical system could be distinguished from disprovable ones by a “mechanical procedure”.

Naively unfamiliar with the work of such experts on the problem as Alonzo Church and Emil Post (“Let us praise the uncluttered mind”, said his colleague Robin Gandy — Dyson, 1997), Turing approached it in a typically original way. Drawing on his amateur interest in machinery, he addressed the question of what can be determined by a “mechanical procedure”, or “what is computable”, by imagining a “Universal Computing Machine” — the now-familiar “Turing machine”, with its finite set of states (or “m-configurations”, as he called them) and its finite but unbounded tape, divided into squares onto which arbitrary symbols can be written. He showed that this machine was so general that it was ultimately equivalent to the most complex computer (Dyson, 1997).
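Though Turing’s original formalism differs in its details, the essence of his machine is easy to render in modern code. The sketch below is an invented example, not Turing’s own construction: a transition table drives a read/write head over a sparse tape, and the machine halts when no rule applies.

    def run(transitions, tape, state="start", blank=" "):
        """Simulate a Turing machine. `transitions` maps
        (state, symbol) -> (new_state, new_symbol, move),
        with move = -1 (left) or +1 (right); halt on a missing rule."""
        cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
        head = 0
        while (state, cells.get(head, blank)) in transitions:
            state, symbol, move = transitions[(state, cells.get(head, blank))]
            cells[head] = symbol
            head += move
        return "".join(cells[i] for i in sorted(cells)).strip()

    # A two-rule machine that flips every bit, then halts at the first blank.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
    }
    print(run(flip, "10110"))  # prints 01001

The universality Turing established means that one fixed table of this kind, given a suitable encoding of any other table and its tape as input, can reproduce that machine’s behavior.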

Turing published his famous paper, “On Computable Numbers, with an Application to the Entscheidungsproblem”, in 1937, and succeeded in demonstrating that the Entscheidungsproblem was unsolvable. Almost as a by-product, he had invented, at least conceptually, the modern computer. Not only did he provide a definition of computability that would survive at least the rest of the century, but he had introduced two fundamental assumptions that still underlie the computers we know today: discreteness of time, and discreteness of “state of mind”, or memory (Dyson, 1997). It was only a matter of time before people would begin to build computers that approximated Turing machines. In fact, von Neumann recognized the potential for practical applications of Turing’s ideas soon after “On Computable Numbers” had been published, when he and Turing were both at Princeton (by this time, Gödel was at the Institute for Advanced Study as well, although he seems to have been mostly unfamiliar with Turing) (Strathern, 1997).
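The unsolvability argument itself is a diagonalization, and a rough paraphrase of it fits in a few lines of Python. The `halts` function below is a hypothetical decider, assumed only for the sake of contradiction; no such total function can actually be written.

    def halts(f, x):
        """Hypothetical: return True iff computing f(x) would terminate.
        Assumed to exist only for the sake of the argument."""
        raise NotImplementedError("no such decider can exist")

    def diagonal(f):
        """Do the opposite of whatever `halts` predicts f does on itself."""
        if halts(f, f):
            while True:   # loop forever if f(f) would halt
                pass
        return None       # halt if f(f) would loop

    # The contradiction: diagonal(diagonal) halts exactly when
    # halts(diagonal, diagonal) says it does not, so `halts` cannot exist.

Since a general decision procedure for logical systems of the kind Hilbert envisioned would yield such a decider, no mechanical procedure can settle the Entscheidungsproblem.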

There is a subtle irony here, in that Gödel’s theorem, which reinstated intuition as a critical aspect of mathematical thought, led directly to the creation of the greatest machines ever built — machines that before long would threaten human intelligence with an eerie Mechanistic intelligence of their own. It was only a little more than a decade after his proposal of the theoretical Universal Computing Machine that Alan Turing published his influential article “Computing Machinery and Intelligence” (the article in which he proposed the test we now call the Turing Test, the passing of which is the Holy Grail of Artificial Intelligence). By this time, Turing, von Neumann, and others had built and programmed computers, and had begun to recognize the magnitude of the revolution they had started. Turing’s article begins,
I propose to consider the question ‘Can machines think?’
He goes on to explore the meaning of this loaded question, and eventually dismisses it as meaningless. He continues, however, with the provocative statement, “Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”, and then proceeds to enumerate and to counter nine possible objections to his bold claim (Hofstadter, 1979). Although Turing’s views on Artificial Intelligence would never be straightforward — for example, he anticipated Artificial Life and genetic algorithms by postulating that true intelligence in machines would have to evolve, rather than be designed (Dyson, 1997) — he placed himself, with this landmark article, clearly in the Mechanistic camp that believes Artificial Intelligence to be possible.

About this…

Guest Post

I came across this article (above), I think in the early 2000s.

The original had this on the title page

Adam Siepel
Mid-term paper for CS 451
October 19, 1999

Then, as now, I found it remarkable that an undergraduate paper was so insightful. In fact, the thoughts expressed at CS history 0 probably got their first impetus from this article.

Reposted here with thanks to Dr. Adam Siepel 

References

  1. Canaday, John, Mainstreams of Modern Art, Second Edition, Holt, Rinehart, and Winston, Inc., 1981.
  2. Dahlbom, Bo and Lars Mathiassen, Computers in Context: The Philosophy and Practice of Systems Design, Blackwell Publishers, 1993.
  3. Dyson, George B., Darwin Among the Machines: The Evolution of Global Intelligence, Perseus Books, 1997.
  4. Hofstadter, Douglas R., Gödel, Escher, Bach: An Eternal Golden Braid, Vintage Books, 1979.
  5. Pagels, Heinz R., The Dreams of Reason: The Computer and the Rise of the Sciences of Complexity, Bantam Books, 1989.
  6. Rose, Steven, The Making of Memory: From Molecules to Mind, Anchor Books, Doubleday, 1992.
  7. Strathern, Paul, The Big Idea: Turing and the Computer, Anchor Books, Doubleday, 1997.
  8. Thorne, J. O., and T. C. Collocott, eds., Chambers Biographical Dictionary, Cambridge University Press, 1984.
  9. Wilson, Edward O., Consilience: The Unity of Knowledge, Vintage Books, 1998.
