Universal Pseudocode

For various reasons, I did not make it to the American Comparative Literature Association 2014 Annual Meeting last weekend to present this paper, as planned, in a very interesting seminar on the topic “Language Capitals and Language Capital.”

I

Philologically conceived, the history of programming languages can be situated within the history of the search for a universal language, as chronicled by Umberto Eco in The Search for the Perfect Language (1995). That history comprises three overlapping phases:

  1. The construction of artificial primary, auxiliary, and cryptographic languages, from the late seventeenth through the late nineteenth centuries in Europe and Asia;
  2. World-wide national-language standardization, alphabetization, and writing reform, from the mid-nineteenth century into the twentieth;
  3. The simulation, at once global and total, of writing (and later, speech) in digital code, from 1945 to the present.

While Eco’s book is typical in its entirely glancing and truncated treatment of the computer programming languages of the twentieth century, one might lament equally the self-imposed constraints of existing histories of programming languages by Jean E. Sammet (1969), Donald E. Knuth and Luis Trabb Pardo (1976), Richard L. Wexelblat (1981), and Thomas J. Bergin and Richard G. Gibson (1996). Irrespective of — indeed, in some ways perhaps precisely owing to — their technical sophistication, such work might be said to have documented the linguistic history of computing without reflecting very extensively on its broader global historical, cultural, and political contexts as a history of human uses of language.

Knuth and Pardo have written of the “pre-Babel” days preceding the release of the first version of the Fortran language and the “explosive growth in language development” that followed from 1957 on (Knuth and Pardo 1976, 2, 95). The gentle irony of this remark reminds us that the history of programming languages up to 1957 was anything but peaceably unilingual (or monological) in character: as early as 1954, Saul Gorn noted that despite aggressive university investment in computer design, construction, and use for data processing, universities were “reluctant to train programmers, feeling that there are too many specialized codes.” Perhaps, he went on to suggest, “this reluctance will vanish if we can provide a code more or less independent of the machine” (Gorn 1954, 75). The specificity of the machine code instruction sets provided by hardware manufacturers, and thus their incompatibility and lack of interoperability, were perceived as a problem almost from the start, and a rich discourse developed that borrowed concepts of language and translation from an everyday linguistic lexicon.

The first and most important requirement of any universal code, or single “higher-level” programming language to be used to program different manufacturers’ computers, was identified as translatability (Gorn 1954, 75), by which was meant that it be accompanied (at least in program specifications or other formal descriptions) by routines for “translating” any such ostensibly hardware-independent universal code into a hardware-dependent machine instruction set. The metaphor of translation — it is a metaphor, not an accurate or even a good descriptor of a computational process — appeared also in reflections on the function of a universal code in relation to the various forms of both linguistic and mathematical notation in which mathematicians and engineers formulated the problems they hoped to program computers to help them solve. Here, the universal code might be imagined as a third or intermediary language, again by resort to often elaborate analogies to the social domain of human language:

The crux of what has been done in the past has been the introduction of a third language into programming — the first two being the language of the machine and the language in which the problem is formulated. […] Until very recently programmers have been like an American who can speak German who finds himself with a Frenchman who can speak Russian. In order to communicate with the Frenchman, the American must find a German who can speak Russian. It would be simpler if the American would learn to speak French or the Frenchman English but, of course, the American would rather have the Frenchman learn to speak English than learn French himself. Similarly, programmers would like computers and data-processing machines to understand the language in which their problems are formulated. (McGee 1957, 57)

Sending a problem into a computer nowadays is like sending an expedition to Africa to trade with the natives. It has to be complete with the missionaries to translate and possibly convert the natives. If the missionaries speak only French and the native tongue then we must speak French but if the missionaries speak English too then everything is all right. (Wegstein 1956, 6)

The metaphor of translation also referred to the abstraction of a mnemonic assembly code, itself abstracting a hardware-dependent machine code, by a higher-level universal code (Gorn 1954, 81) — or to the rewriting, in “pseudocode,” of the everyday natural (human) language in which a programmer might initially sketch out a problem (Gill 1954, 98). It was understood as more efficient, and therefore less costly, to have a programmer provide instructions in the “foreign language” of such a pseudocode “and have the machine translate it into its own language” than for the programmer to work directly in machine code (Backus and Herrick 1954, 106) — although under some circumstances, a form of code-switching might be encouraged, in which the pseudocode, closer to natural language, might nonetheless permit the integration of instructions in machine code in cases where the greater speed and efficiency of the latter were critical (Backus and Herrick 1954, 106). What we today call “programming” was at the time called “automatic coding,” a partial automation of the human labor of creating, storing, and retrieving the instructions that made it possible for the hardware computer itself to operate as “automatic” — that is, to take the place of the human computer (usually a woman) who had performed that labor before. In such cases, a constellation of linguistic analogies might grow quite elaborate: Brown and Carr III (1954), for example, proposed imagining a “dual language system and a programmed translation between the two languages, using the computer to perform the translation” (Brown and Carr III 1954, 85).
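
By way of illustration, and at the risk of deliberate anachronism, the sketch below renders this kind of “code-switching” in a present-day idiom: a problem stated in the higher-level “foreign language” and left to the translator (here, a C compiler), alongside the same operation dropped into machine-level instructions. Everything about the sketch is an assumption of convenience rather than a historical artifact: C, GCC/Clang extended inline assembly, and the x86-64 instruction set all postdate the systems Backus and Herrick describe.

```c
#include <stdio.h>

int main(void) {
    int a = 6, b = 7;

    /* The statement written in the higher-level "foreign language" and
       left to the translator (here, the C compiler):                    */
    int product_hl = a * b;

    /* The same operation "code-switched" into machine-level instructions,
       embedded in the higher-level text: x86-64 assembly via GCC/Clang
       extended asm, standing in, anachronistically, for 1950s machine code. */
    int product_ml;
    __asm__ ("imull %2, %0"
             : "=r" (product_ml)      /* output: the product, in a register */
             : "0"  (a), "r" (b));    /* inputs: a tied to the output; b    */

    printf("%d %d\n", product_hl, product_ml);  /* prints: 42 42 */
    return 0;
}
```

The logic of the mixture, however, is the one Backus and Herrick articulate: the pseudocode remains close to the language in which the problem is formulated, while the programmer retains the option of descending into the machine’s own “language” where its greater speed and efficiency are critical.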

Of the need to develop a “universal computer language,” Brown and Carr III (1954) observed that input methods, “limited to certain standard input characters, are not able to assimilate the more unusual symbols needed” by existing combinatorial and logical languages that might otherwise be adapted to the task (Brown and Carr III 1954, 89). Joseph Wegstein noted that there were already many “automatic coding systems” in active use, that most of these systems were quite expensive, and that laboratories might continue to use an inferior system merely in order to protect their investments. In an effort to address this, he added, organizations such as USE (Univac Scientific Exchange) had emerged to coordinate standards for automatic coding on particular machines. Beyond that level, Wegstein pointed to work in information theory and formal logic that he thought should enable “the black box to reach clear across,” with mathematical equations themselves serving directly as pseudocode. At this point, Wegstein remarked, “the long-sought universal code will also have been found. Universal code designates a pseudocode that is acceptable to more than one type of computer” (Wegstein 1956, 5).

“The pseudocodes themselves,” remarked Grace Murray Hopper, referring to what we today call programming languages, “form a whole field of study and research” (Hopper 1954, 2). Hopper defined “automatic coding” as the automation of the coder’s labor, a process that had two components: the first, “devising a method for expressing the information contained in a flow chart,” or “devising pseudocodes,” and the second, “preparing the routines and subroutines to process the pseudocode and produce computer coding and ultimately results” (Hopper 1954, 2). Hopper, too, imagined a universal pseudocode for which every computer installation would provide its own “interpreter or compiler” (Hopper 1954, 4). Remarking on the accumulation of code for existing computers and the proliferation of new computer designs using new sets of hardware instruction codes, she also called for “translators to make available ‘old’ coding to new computers,” to conserve programming time and labor. “Since this will be a mechanical operation,” she concluded, “we shall look at the computers to do this job for us.” More generally, she suggested, “just as compilers control generators to produce programs to process data, we shall soon be talking of systems containing computers controlling and directing computers” (Hopper 1954, 5).
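
The second of Hopper’s two components, the “routines and subroutines to process the pseudocode,” can likewise be sketched in miniature. The toy below, written again in present-day C and entirely hypothetical (the three mnemonics, the single accumulator, and the program correspond to no historical system), processes a three-line pseudocode by direct interpretation rather than by producing “computer coding”; it plays the part, that is, of Hopper’s “interpreter” rather than her “compiler.”

```c
#include <stdio.h>
#include <string.h>

/* A toy pseudocode: each line pairs a mnemonic with a numeric operand.
   (Hypothetical illustration only; not any historical system's code.)  */
static const char *program[] = {
    "LOAD 6",   /* place 6 in the accumulator    */
    "MULT 7",   /* multiply the accumulator by 7 */
    "PRNT 0",   /* print the accumulator         */
};

int main(void) {
    long acc = 0;  /* a single accumulator, in the manner of many 1950s machines */
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        char op[5] = {0};
        long arg = 0;
        sscanf(program[i], "%4s %ld", op, &arg);
        if      (strcmp(op, "LOAD") == 0) acc  = arg;
        else if (strcmp(op, "MULT") == 0) acc *= arg;
        else if (strcmp(op, "PRNT") == 0) printf("%ld\n", acc);
    }
    return 0;  /* prints: 42 */
}
```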

II

Let me make this the occasion for a brief closing remark on one of the exigencies of the historical present. Recent years have seen the emergence and growth of initiatives to add instruction in software programming to primary and secondary educational curricula in the United States and the United Kingdom, as well as to provide such instruction at no cost or low cost via the Internet, outside the historical institutional boundaries of the educational sector. This is a complex and multifaceted development, whose dimensions I cannot explore fully here. But it is unquestionably one of the political and cultural effects of the 2007–2008 U.S. housing and banking crisis, which briefly offered Silicon Valley the opportunity to rebuild the reputation tarnished by the 1997–2000 dotcom bubble and collapse (now tarnished once again by its collaboration with the National Security Agency), sent institutional investors fleeing the housing market in search of profit in higher education reform, and encouraged both policymakers and U.S. consumers to seek quick fixes to unemployment through the technical retraining of an unemployed and under-employed managerial middle class.

A 2013 report in the Harvard Business Review entitled “America’s Incredible Shrinking Information Sector” throws cold water on such hopes and dreams, noting that far from having grown, “[t]he information industry […] shed more jobs in the first decade of the millennium than any other sector except manufacturing,” and that “[t]he culprit, ironically enough, is tech-driven innovation, which has produced dramatic gains in efficiency and widespread automation.” This suggests that the dreamed-of “code literacy” to be acquired by “learning to code” represents the acquisition of what is at present a semi-artisanal skill with little real future as human labor, certainly no ticket to entry into a growth area of the U.S. or even of the global economy. This should not surprise us, if we understand that the entire history of computer programming and of programming languages is a history of recursive automation: that is, of the addition of successive layers of control in which so-called higher-level programming “languages” are translated into lower-level codes. To consider the abstraction of such higher-level programming languages, in their superimposition of the English language onto assembly codes and ultimately onto hardware instruction codes, the latter themselves abstractions of the binary notation in which hardware design takes symbolic form, is to complicate access to such “code literacy” considerably.
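
To make that layering concrete, the sketch below sets a single higher-level statement beside one plausible descent through the layers beneath it. The assembly mnemonics and machine-code bytes given in the comments are illustrative assumptions for one common target (x86-64, compiled with optimization); the actual output varies with compiler, options, and architecture, which is itself part of the point.

```c
#include <stdio.h>

/* The higher-level "language": English-like keywords superimposed on
   the machine operations sketched in the comment below main().       */
static int add_one(int x) {
    return x + 1;
}

int main(void) {
    printf("%d\n", add_one(41));  /* prints: 42 */
    return 0;
}

/* One plausible descent through the layers for add_one on x86-64
   (illustrative; actual output varies by compiler and options):
     assembly (Intel syntax), roughly what an optimizing compiler emits:
         lea eax, [rdi + 1]
         ret
     machine code, the bytes the assembler produces for those mnemonics:
         8d 47 01   c3
   Beneath that again lies the binary notation in which the hardware
   design itself takes symbolic form.                                   */
```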

References

Backus, John W., and Harlan Herrick. 1954. “IBM 701 Speedcoding and Other Automatic-Programming Systems.” In Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, edited by U.S. Navy Mathematical Computing Advisory Panel, 106–113. Washington, D.C.: U.S. Dept. of Commerce, Office of Technical Services.

Bergin, Thomas J., and Richard G. Gibson, eds. 1996. History of Programming Languages II. New York: ACM Press; Addison-Wesley.

Brown, J. H., and John W. Carr III. 1954. “Automatic Programming and Its Development on the MIDAC.” In Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, edited by U.S. Navy Mathematical Computing Advisory Panel, 84–97. Washington, D.C.: U.S. Dept. of Commerce, Office of Technical Services.

Eco, Umberto. 1995. The Search for the Perfect Language. Translated by James Fentress. Oxford, UK; Cambridge, MA, USA: Blackwell.

Gill, Stanley. 1954. “General Discussion at End of Thursday Afternoon Session.” In Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, edited by U.S. Navy Mathematical Computing Advisory Panel, 98. Washington, D.C.: U.S. Dept. of Commerce, Office of Technical Services.

Gorn, Saul. 1954. “Planning Universal Semi-automatic Coding.” In Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, edited by U.S. Navy Mathematical Computing Advisory Panel, 74–83. Washington, D.C.: U.S. Dept. of Commerce, Office of Technical Services.

Hopper, Grace Murray. 1954. “Automatic Programming — Definitions.” In Symposium on Automatic Programming for Digital Computers, Office of Naval Research, Department of the Navy, Washington, D.C., 13-14 May 1954, edited by U.S. Navy Mathematical Computing Advisory Panel, 1–5. Washington, D.C.: U.S. Dept. of Commerce, Office of Technical Services.

Knuth, Donald E., and Luis Trabb Pardo. 1976. “The Early Development of Programming Languages.” Technical Report STAN-CS-76-562. Stanford, CA: Computer Science Department, Stanford University. http://http.se.scene.org/pub/bitsavers.org/pdf/stanford/cs_techReports/STAN-CS-76-562_EarlyDevelPgmgLang_Aug76.pdf.

McGee, Russell C. 1957. “Omnicode — a Common Language Programming System.” In Automatic Coding: Proceedings of the Symposium on Automatic Coding, January 24-25, Franklin Institute, Philadelphia, 57–70. Journal of the Franklin Institute Monograph 3. Philadelphia, PA: The Franklin Institute.

Sammet, Jean E. 1969. Programming Languages: History and Fundamentals. Englewood Cliffs, N.J.: Prentice-Hall.

Wegstein, Joseph H. 1956. “Automatic Coding Principles.” In Symposium on Advanced Programming Methods for Digital Computers: Washington, D.C., June 28, 29, 1956, 3–6. Washington, D.C.: Office of Naval Research, Dept. of the Navy.

Wexelblat, Richard L., ed. 1981. History of Programming Languages. New York: Academic Press.


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.