Richard:

I should perhaps explain that Rob Hagan:

- included "boxes" in Scheme (fairly common in Scheme, actually) so you could pass "variables" (= ML refs) as parameters
- provided a really good single-stepping debugger so students could watch the execution of their program in some detail (I'm sure the DEC-10 Prolog debugger helped me a lot when learning Prolog)

Using Scheme made it possible for him to do these things, of course.
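To make the "boxes" point concrete, here is a hypothetical sketch (in Python, standing in for Scheme; the `Box` class and `increment` function are my own illustration, not Hagan's code) of a one-slot mutable cell playing the role of a Scheme box / ML ref:

```python
# A "box" is a one-slot mutable cell, the analogue of an ML 'a ref.
# Passing the box lets a procedure update the caller's "variable".
class Box:
    def __init__(self, value):
        self.value = value

def increment(box):
    box.value += 1          # set-box! in Scheme, := on a ref in ML

counter = Box(0)
increment(counter)
increment(counter)
print(counter.value)        # the callee has mutated the caller's cell twice
```

The point is that the box, unlike a plain value, can be handed to a procedure as a parameter and updated through it.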

I don't know what you mean by "unsuccessfully tried to have procedures as parameters included in ALGOL 60 in a straightforward manner". Algol 60 had procedure parameters. They were arguably too straightforward, as you could not specify the parameter types, only the result type. This was not straightforward for implementors, but it was certainly straightforward for users, and Niklaus Wirth made exactly the same mistake in the first version of Pascal. Heck, even Fortran let you pass subprogram names as parameters; the new thing in Algol 60 was that you could have nested procedures and pass those as parameters.

If you want to connect early programming to functional programming, you can hardly do better than de Bakker's 1967 PhD thesis "Formal Definition of Programming Languages: with an Application to the Definition of ALGOL 60." In it he offered a translation of Algol to the lambda calculus. Before that, there was

A Correspondence between ALGOL 60 and Church's Lambda-Notation: Part I

P. J. Landin, CACM 8.2, February 1965

A Correspondence between ALGOL 60 and Church's Lambda-Notation: Part II

P. J. Landin, CACM 8.3, March 1965

That paper talks about a model language "Applicative Expressions" with an example

let a = A

and b = B

and f(x, y) = F

let rec g(x) = G

and h(y, z) = H

let k(u, v)(x) = K

X

alongside its lambda-calculus equivalent. Look familiar?
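The lambda-calculus reading can be sketched directly; here is a rough Python rendering (my own illustration, not Landin's notation) of how `let a = A and b = B in X` desugars to application, and how `let rec` needs a fixpoint:

```python
# "let a = A and b = B in X" is just (lambda a, b: X)(A, B):
x = (lambda a, b: a * 10 + b)(1, 2)     # a = 1 and b = 2, body a*10 + b

# "let rec g(x) = G in X" is read via a fixpoint combinator
# (Python's def is already recursive, but the pure-lambda version uses Z):
fix = lambda f: (lambda s: f(lambda v: s(s)(v)))(lambda s: f(lambda v: s(s)(v)))
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(x, fact(5))
```

The `let k(u, v)(x) = K` form is curried definition, i.e. `k = lambda u, v: lambda x: K`.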

"Where would CS have been for the last 50 years"? Well, the rise of functional programming has something to do with the fact that Algol 60 did have procedure parameters and that you needed something like FP to describe it. So the answer is "just like it was."

As for upper case,

- The earlier Model 33 and Model 35 Teletypes were upper case only.
- ASCII got lower case only in 1967.
- The Model 37 Teletype was introduced in 1969.

It wasn't that the languages wanted upper case. Algol 60 was always published in lower or mixed case, and lower case was supported by those manufacturers who had lower case. The I/O equipment was mostly upper-case only. Here's typical published Algol from 1965.

procedure inverse permutation (P) of natural numbers up to: (n);

value n; integer n; integer array P;

comment Given a permutation P(i) of the numbers i = 1(1)n,

the inverse permutation is computed in situ. The process

is based on the lemma that any permutation can be written

as a product of mutually exclusive cycles. Procedure

inverse permutation has been tested for several

permutations including n = 1;

begin integer i, j, k, first;

for i := 1 step 1 until n do P[i] := -P[i];

comment now P[i] contains a negative number if original

and a positive number if inverse;

first := 1;

cycle: k := first; i := -P[k];

next: j := -P[i]; P[i] := k;

if i = first then go to endcycle;

k := i; i := j; go to next;

endcycle:

if first = n then go to finish;

first := first + 1;

if P[first] < 0 then go to cycle else go to endcycle;

finish:

end inverse permutation;
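For readers who find the goto-structured original hard to follow, the same cycle-by-cycle in-situ inversion can be sketched like this (my transcription into Python, not from the 1965 source; 1-based values stored in a 0-based list):

```python
def invert_in_place(p):
    """Invert a permutation p of the values 1..n in place, cycle by cycle."""
    if not p:
        return
    n = len(p)
    for idx in range(n):
        p[idx] = -p[idx]              # negative marks "still the original"
    first = 1
    while True:
        if p[first - 1] < 0:          # start of a cycle not yet inverted
            k, i = first, -p[first - 1]
            while True:
                j = -p[i - 1]
                p[i - 1] = k          # inverse: position old-P[k] maps back to k
                if i == first:
                    break
                k, i = i, j
        if first == n:
            return
        first += 1

p = [2, 3, 1]
invert_in_place(p)
print(p)                              # the inverse of (1→2, 2→3, 3→1)
```

The sign bit does the bookkeeping the Algol comment describes: negative entries are original, positive entries are already inverted.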

For that matter, section 2.1 of the Revised Report on Algol 60

says "<letter> ::= a | b | ... | y | z | A | B | ... | Y | Z"

where the abbreviation is mine, and then says

"This alphabet may be arbitrarily restricted, or extended with any other distinctive character (i.e. character not coinciding with any digit, logical value, or delimiter)." I've even seen, but was unable to read, an Algol program using Chinese characters.

Heck, the report contains this example:

procedure euler(fct, sum, eps, tim);

value eps, tim;

real procedure fct;

real sum, eps;

integer tim;

comment

euler computes the sum of fct(i) for i from zero up to

infinity by means of a suitably refined euler transformation.

The summation is stopped as soon as tim times in succession

the absolute value of the terms of the transformed series are

found to be less than eps. Hence, one should provide a function

fct with one integer argument, an upper bound eps, and an

integer tim. The output is the sum sum. euler is particularly

efficient in the case of a slowly convergent or divergent

alternating series;

begin integer i, k, n, t; array m[0:15]; real mn, mp, ds;

i := n := t := 0; m[0] := fct(0); sum := m[0]/2;

nextterm:

i := i+1; mn := fct(i);

for k:=0 step 1 until n do begin

mp := (mn+m[k])/2; m[k] := mn; mn := mp

end means;

if abs(mn) < abs(m[n]) and n < 15 then begin

ds := mn/2; n := n+1; m[n] := mn

end accept else begin

ds := mn;

end;

sum := sum+ds;

if abs(ds) < eps then t := t+1 else t := 0;

if t < tim then goto nextterm

end euler;

Notice the function parameter?
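To see what that function parameter buys you, here is a direct transcription of the report's euler procedure (my Python rendering, with the sum returned instead of assigned to a name parameter):

```python
def euler(fct, eps, tim):
    """Sum fct(0) + fct(1) + ... via the Euler transformation, stopping
    once tim successive transformed terms fall below eps in magnitude."""
    m = [0.0] * 16
    i = n = t = 0
    m[0] = fct(0)
    total = m[0] / 2
    while True:
        i += 1
        mn = fct(i)
        for k in range(n + 1):            # update the table of means
            mn, m[k] = (mn + m[k]) / 2, mn
        if abs(mn) < abs(m[n]) and n < 15:
            ds = mn / 2                   # accept: deepen the table
            n += 1
            m[n] = mn
        else:
            ds = mn                       # reject: table is deep enough
        total += ds
        t = t + 1 if abs(ds) < eps else 0
        if t >= tim:
            return total
```

For example, `euler(lambda i: (-1)**i / (i + 1), 1e-10, 3)` converges rapidly to ln 2, where direct summation of the alternating harmonic series crawls. The whole point is that `fct` arrives as a parameter, exactly as in the 1960 report.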

The parameter types of a procedure parameter were not specified at compile time. In the same way, the dimensionality of an array parameter was not specified. In both cases, (a) these were checked at run time by, e.g., the Dutch compilers, and (b) they were fixed quickly in successors such as Burroughs Algol, Algol W, and Algol 68.

For example, in Burroughs Algol you would write

procedure Euler(fct, sum, eps, tim);

value eps, tim;

real procedure fct(x); value x; real x; formal;

real sum, eps;

integer tim;

if you had a Model 37 teletype... (or, as we did, an

Olivetti printing terminal, which had lower case).

Scott and Strachey may have started in 1967, but they did not start the practice of using functions to denote the meaning of imperative languages. I'm not sure if that honour belongs to Landin (1965) or de Bakker (PhD published in 1967, but he had started well before that).

Reverting to capital letters. UNIX did NOT introduce lower case letters. C did NOT introduce lower case letters. The Algol 60 report was quite explicit about wanting lower case letters. Indeed, UNIX V6 was still perfectly happy working with upper-case-only terminals in 1979 when I met it. (For that matter, I have Solaris 10 on an old SunBlade-100, and it still supports -iuclc and -olcuc in the terminal driver.) There's a reason why "native" C style does not use StudlyCaps, and that is that initially you could rely on having lower case, or on having upper case converted to lower case, but you could not rely on having both cases.

Pattern matching goes back to COMIT. Recursion was in Algol 60.

The FIRST version of Emacs was NOT written in Lisp; it was written in TECO.

I didn't know that Meertens had worked with SETL.

One thing you are missing:

1968: Statically typed lexically scoped anonymous functions appear in Algol 68.

I'm not sure what year it came out. Maybe about 1973. There was a French introductory programming textbook called, if memory serves me correctly (which in this case I think it doesn't), "Mes premiers pas en programmation", which used Algol 68. It was more than halfway through the book before they introduced the assignment statement... Honestly, ML strikes me as Algol 68 minus formatted I/O plus modules.

real x = 1.2

defines x to be a real value, not a real variable.
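The nearest sketch I can offer in a mainstream language (Python here; `typing.Final` only conveys intent, since Python has no true constants, so this is an analogy rather than an equivalent):

```python
from typing import Final

x: Final[float] = 1.2     # like Algol 68's  real x = 1.2  : a value, not a variable
y: float = 1.2            # like  real y := 1.2  : a variable you may reassign

# Algol 68 also had statically typed, lexically scoped anonymous functions,
# e.g. ((real a) real: a * a); the (dynamically typed) Python counterpart:
square = lambda a: a * a
print(square(x))
```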

Richard A. O'Keefe

Hey Richard!

That's a lot of valuable stuff.

Is it ok if I post that as a comment coming from Richard A. O'Keefe in the comments section?

Specifically about the Algol FP comment, I am trying to track down the comment, which I believe was in Wexelblat's History...

Don't have it right now.

Best

Rusi

By the way, I was reading through an Elliott 503 Algol manual last night. There's no date on it, but the machine was introduced in 1963. The input device was a Friden Flexowriter, and Friden were bought by Singer in 1965, so it must be from before 1965. The manual makes it quite clear that they supported the full Algol 60 character set, apart from a few characters that were obtained by overstriking (the "|" and "_" characters were nonspacing) and a few that had to be spelled out (notably and, or, not). Both letter cases were available. The 503 and its predecessor the 803 were very popular for their day; practically every university in Britain would have had one. Lower case.

Rusi:

Hi Richard

Interesting (counter)examples!

However I guess the whole question of case is by any standard rather unrelated to programming and FP in particular.

Still let me say why I talked of it at all (Maybe it should just go??)

There is formal math -- Russell & Whitehead, Peano axioms, Frege, etc.

And there is informal (normal) math -- what most professional mathematicians practice.

If the formal mathematical semantic properties of programs don't match informal notations, it is not likely to be pragmatically useful to people.

eg

For everyone (starting myself!) Lisp's prefix notation is simple compared to parenthesis, BODMAS etc etc

However an expression in Lisp is a pain to decipher.

So in general while a 'universal notation' can allow one to code up anything (including another universal notation) the coding depth may be so deep as to be humanly useless.

Case is a rather minor example of this

ASCII→Unicode is a much more central example

The latter is really my axe to grind for which I am using the former to make the point

Rusi

Richard:

On 13/06/2015, at 2:20 pm, Rusi Mody wrote:

However I guess the whole question of case is by any standard rather unrelated to programming and FP in particular.

I'm not sure. Dijkstra wrote often about the importance of notation.

Still let me say why I talked of it at all (Maybe it should just go??)

Not more so than any other language, surely?

There is formal math -- Russel&Whitehead, Peano axioms, Frege etc And there is informal (normal) math -- what most professional mathematicians practice. If the formal mathematical semantic properties of programs dont match informal notations, it is not likely to be pragmatically useful to people. eg For everyone (starting myself!) Lisp's prefix notation is simple compared to parenthesis, BODMAS etc etc However an expression in Lisp is a pain to decipher.

I once did a head-to-head comparison looking at some algorithms written in both C and Lisp. If you counted all the brackets, () [] {}, C actually had more of them than Lisp.

I do not say that this applies to every algorithm, just the numerical algorithms I happened to check.

Admittedly, C can get by with fewer parentheses than it normally uses: there is absolutely no need to wrap every comparison in parentheses as too many programmers do.

Did you mean to use 'parenthesis' as if it were plural?

So in general while a 'universal notation' can allow one to code up anything (including another universal notation) the coding depth may be so deep as to be humanly useless.

If you learn Lisp before you learn C (and indeed, I wrote two Lisp interpreters before I ever saw C), it is C that looks "deeply coded" compared with Lisp. As for JavaScript, words fail me.

For that matter, I still often use APL notation when thinking onto paper.

Case is a rather minor example of this

ASCII→Unicode is a much more central example

The latter is really my axe to grind for which I am using the former to make the point

While Edinburgh still had an upper-case-only terminal or two when I went there in 1979, they also had some terminals with ∀ ∃ ∈ and some other mathematical characters. For that matter, the 6-bit character set Burroughs used in the 1960s (BCL) included ≤ and ≥. There was the Madcap series of programming languages. "Aspects of Language Design for Combinatorial Computing", by Mark B. Wells, goes back to 1963.

MADCAP is a programming language of the FORTRAN-ALGOL type, although, due primarily to the use of a "scripting" Flexowriter for preparing input, it is notationally more advanced. This capacity to display formulae, the ability to imply multiplication by the juxtaposition of factors, and the use of indentation to indicate the scope of a conditional expression (= conditional) or iteration distinguish MADCAP as an exceptionally readable language. The character set includes the alphabet in upper and lower case, the decimal digits, and twenty-four common punctuation marks and mathematical symbols (...). The characters may be typed in red or black.

See CACM volume 4 issue 1 (January 1961) for an outline of an earlier version. Not just mathematical symbols but two-dimensional notation as well...

MADCAP was a very interesting language family which apparently had astonishingly little influence. Someone really needs to write it up in detail for computing history.

MADCAP has some things in common with SETL -- indeed, there's a paper comparing them, which I have not seen -- and the original SETL used an enriched character set too.

Basically, any language that used an enriched character set was doomed. APL clings to existence by its fingernails; SETL long ago abandoned non-ASCII characters and even then only just survives. MADCAP? Ah, where are the snows of yesteryear?

Mark Wells was very clear back in 1961 that he wanted a notation for computer programs that looked like the notation you would find in a textbook. I think he would (would have? I don't even know if he's still alive) agree strongly with you that a good notation, including the use of appropriate characters well out of the range of old 8-bit character sets, is important for programming.

The only real point I am making here is that back in the early 1960s people were very clear that they WANTED lower case (and an assortment of mathematical symbols) and that some people were fortunate enough to HAVE them; that while UNIX and C are admirable in their way, they long post-date that realisation.

Or to put it another way: our predecessors were very smart people and there is little new under the software sun. (Modern hardware is amazing and has enabled the widespread adoption of rather old software ideas. See "The Mother of All Demos.")

The problem with 2-dimensional notation is that people tended to demonstrate it with things like the formula for the roots of a quadratic, and thanks to the vagaries of floating-point arithmetic, you don't want to program it that way. A good-way-to-write-a-formula-for-symbolic-reasoning and a-good-way-to-organise-a-calculation-to-give-useful-results are often very different, and 2-dimensional notation really does not help with the hard parts, nor does it extend to them terribly well, at least in any obvious way.
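The quadratic formula is the classic instance: the textbook form subtracts nearly equal numbers when b² ≫ 4ac, so one root loses most of its digits. The standard reorganisation (a sketch of the well-known technique, not code from this thread) computes that root from the product of the roots instead:

```python
import math

def roots_naive(a, b, c):
    # Textbook formula: fine for symbolic reasoning, bad for calculation.
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Avoid cancelling -b against d; get the other root from r1*r2 = c/a.
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q

# x^2 - 1e8 x + 1 = 0: roots near 1e8 and 1e-8.
# The naive formula mangles the small root; the stable one does not.
print(roots_naive(1.0, -1e8, 1.0))
print(roots_stable(1.0, -1e8, 1.0))
```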

What matters for floating-point calculations is our present inability to express certain things at all in our notations, like the way Java hides away most of the IEEE facilities, or the way even C makes it hard to do "presubstitution".

PS I guess its ok if I paste this conversation in the comments of the blog?

Absolutely.

Flexowriters, and with them lower case, were in use long before 1963. MIT's Whirlwind II had them in 1952. Assembler opcodes were lower case, except for some upper-case pseudo-ops. It came as a shock to learn that Whirlwind's more powerful replacement, an IBM 704, dealt exclusively with upper case, as if the 704 were a Roman artifact. Upon the advent of CTSS (1961) lower case again became customary. The manuals for pre-existing languages still expressed syntax in upper case, but users of case-sensitive terminals typically used lower case. Language implementations respected case in string literals and run-time data.

An amusing sidelight is that some editions of the Chicago Manual of Style deprecated upper case text in general but stipulated that computer programs should be upper case.

Heh! The manual of style part is funny! What goes around comes around... There's a more exact term that eludes me at the moment.

[And thanks Doug for dropping by!]