
Monday, September 9, 2013

The Poorest Computer Users are Programmers

In the old days programmers programmed computers. Period.

Nowadays, when everything is a computer and the traditional computer is about a decade and a half behind the curve, describing a programmer as someone who programs computers is narrow and inaccurate. Instead we should think of programmers as working to effect and improve the human-X interface, where X may be 'computer'. But it could also be IT, or technology, or the network – and, through that last, interaction with other humans.

Now the classic 'nerdy' programmer was (by stereotype) always poor at 'soft' questions like these:  Interaction? Synergy?! What has all that manager-PR talk to do with programming?

And so today…

Programmers are inept as users of computers

Some examples:
Programs are some of the most heavily structured documents that humans make.  Yet more than two decades after HTML and five after hypertext, programmers still use plain text files.
I had written a bit about this earlier:
  • Programs are naturally hypertextual objects
  • Since hypertext is universally equated with HTML, programmers shy away from this option
  • This amounts to a Luddite choice
Tabs and indentation
If something like this were more widely known, most of these arguments would vanish.
Really, the fact that if I like 3-space indents and you like 7 then one of us must give in to the other speaks volumes about how unconfigurable our programming systems are.
The only guys (other than telegraph operators) who use the Courier monospace font today are programmers.  !SWEET!
Searching – grep – is entirely regexp-based and therefore inherently non-structural; replace is likewise non-structural.
And therefore functionality built on top of this, like VCS merging, remains too low-level.
File Formats
Every layman who uses a computer today uses tools with sophisticated file formats – except the programmer, who is stuck with the 50-year-old text file!  Programmers invariably balk at anyone questioning this, pointing to the lock-in produced by proprietary formats like .doc.  Let us remember that there exist open formats like RTF and PDF, and most notably HTML.
The Real Issue is that

Programs carry 3 levels of Structure

  1. shallow structure – syntax
  2. deep structure – semantics
  3. And between the two, for want of a better word, there is the medium structure of programs – or just 'structure'.
Deep structure is essentially an unsolvable problem. Think…
  • Halting Problem
  • Automatic detection of and recovery from errors
  • Objective tests for program beauty
  • etc etc
And so, in a fit of depression, programmers have behaved as though shallow structure is all their tools can handle.  This depressed behavior has unfortunately been so successful that it has become the de facto standard. A workable definition of shallow structure is structure that can be comprehended as a regular language.  The famous Unix tools of the 'Programmer's Workbench' (find, grep, sed etc.) constitute a decisive step in effectively exploiting this most impoverished of language classes while staying within the regular realm.
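A sketch of what staying within the regular realm costs (the function names here are illustrative): a regular expression cannot recognize arbitrarily nested structure such as balanced parentheses, while a check with unbounded memory – one small step beyond regular – can.

```python
import re

# A depth-1 regex: matches '(abc)' but cannot be extended to
# arbitrary nesting, because balanced parentheses are not regular.
shallow = re.compile(r'^\([^()]*\)$')

def balanced(s):
    """Check balanced parentheses with an unbounded counter --
    just beyond what a regular language can express."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(bool(shallow.match('(()())')), balanced('(()())'))   # False True
```

Programs are nested all the way down, which is why purely regular tools stop at the shallow level.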

Awk and Emacs are interesting cases: in principle both are Turing-complete, but in practice both are mostly used in the regular-language manner.  And thence follows the scripting revolution – perl, python, ruby – which work elegantly and beautifully on a small scale but scale up badly.

On the other side, traditional programming language implementations invariably degenerate into…

Silos of Medium Structure

  • Medium structure, because deep structure is an unsolvable problem: a compiler cannot correctly handle a program, nor a linker a larger-scale code-base, if shallow/surface structure is all they address.  So implementations need to dig deeper than the regular level.  On the other hand they must avoid getting stuck in the Turing tar-pit (as can happen to theorem provers), ie they must skirt the deep level.
  • And so, while the implementation of current mainstream languages is a solved problem, it is very hard to extract and re-provision bits and pieces of a language implementation into a library/API.  This makes the implementations silos. Imagine re-provisioning gcc's parser as an API for an editor to syntax-highlight C source – not hard to imagine in principle, but a nightmare in practice.

Glue — Lisp

The problem is that low-level languages like C provide very poor glue. For example, C has no standardized collection types (the array is too far from first-class to count), and so frameworks like GLib must create mountains of clumsiness just for minimal communication between components.
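By way of contrast, here is a sketch (in Python; the two component functions are hypothetical) of what first-class collections buy as glue: any component that produces a list can feed any component that consumes one, with no framework code in between.

```python
# Component 1: produces a plain built-in list -- no framework types.
def tokenize(text):
    return text.split()

# Component 2: consumes any list of strings, wherever it came from.
def histogram(words):
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

# Composition is just function application -- the list is the glue.
print(histogram(tokenize('to be or not to be')))
```

In C the equivalent wiring would mean agreeing on an allocation discipline, an element type, a length convention and an ownership protocol before the two components could exchange a single word.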

C's low-levelness is supposedly cured in C++ and Java, but at the cost of increasingly heavy structuring of irrelevantia – ie the silo-ing only increases.

A notable exception to this is Lisp – see Paul Graham.  Modern languages like python are successful to the extent that they imitate lisp.

Vision for Future

For the last 50 years software has been modularized on either algorithms or functionality.  Ironically, the algorithmic module is typically called a function and the functional module is called a class, or just a module.

I believe that internal DSLs like Parsec and UUAG will show the way to a new age of modularization – an organization that will respect the structures of the problem-domain more than the erstwhile nuts-and-bolts OO and imperative alternatives.

Note: that Parsec and UUAG are DSLs is old hat – yacc has been around for four decades!  What's new is that they are DSL-generators.
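The flavour of such combinator DSLs can be sketched in a few lines of Python (this is in the spirit of Parsec, not its actual API): each parser is a function from input to a (value, rest-of-input) pair or None, and combinators build new parsers out of old ones.

```python
# Each parser maps an input string to (value, rest-of-input) or None.
def char(c):
    def parse(s):
        return (c, s[1:]) if s[:1] == c else None
    return parse

# seq and alt are combinators: they take parsers and return parsers,
# so grammars are built by ordinary function composition -- an
# internal DSL that generates new DSLs as you define new combinators.
def seq(p, q):
    def parse(s):
        r = p(s)
        if r is None:
            return None
        v1, rest = r
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

def alt(p, q):
    def parse(s):
        return p(s) or q(s)
    return parse

ab = seq(char('a'), char('b'))          # the 'grammar' ab -> 'a' 'b'
print(ab('abc'))                        # (('a', 'b'), 'c')
print(ab('xyz'))                        # None
```

Unlike yacc, there is no external generator step: the grammar is a first-class value in the host language, and can be abstracted over, passed around and composed like any other value.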

And this also explains why, for the current generation of programmers, these ideas are hard and esoteric – for most programmers today, recursion is hard and meta-circularity is completely out of bounds.

DSL-generators will never click until this hurdle is crossed – programmers need to move away from the notion that they are in complete command of their systems to a more mutual outlook:
  • Yes, we do program our machines
  • Can we see that our machines also program us??
Since our world clearly shapes our thoughts, and our technology is a central aspect of our world, technology is an important shaper of our thoughts. In short, all humans are programmed by their environment – upbringing, genes, culture etc. Those humans who also program their contexts are programmers. Therefore…

Programmers are recursive humans

Do we know this?

For starters, languages that make recursion hard – ie most current OO and imperative languages – will continue to produce generations of intellectually crippled programmers.  CS academia needs to move towards languages that make the pervasiveness of recursion and meta-circular thinking more accessible.
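Meta-circular thinking can be made concrete with the classic exercise of an evaluator: a program whose input is itself a program. The sketch below is not a true meta-circular evaluator (it is Python evaluating a toy Lisp, not Lisp evaluating Lisp), but it shows the recursion at the heart of the idea; the supported operators are a deliberately minimal, hypothetical set.

```python
import operator

# The 'environment': symbols bound to meanings.
ENV = {'+': operator.add, '*': operator.mul}

def evaluate(expr, env=ENV):
    """Evaluate a Lisp-style expression written as nested Python lists."""
    if isinstance(expr, (int, float)):
        return expr                      # numbers are self-evaluating
    if isinstance(expr, str):
        return env[expr]                 # symbols look themselves up
    op, *args = expr
    if op == 'if':                       # a special form: lazy branches
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    f = evaluate(op, env)                # otherwise evaluate and apply,
    return f(*(evaluate(a, env) for a in args))   # recursively

print(evaluate(['+', 1, ['*', 2, 3]]))   # 7
```

A programmer comfortable with this ten-line loop has crossed the hurdle: programs are data, and the machinery that runs them is just another program.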

More philosophically, Chomsky's universal grammar and Turing equivalence have ruled academic computer science, and thence IT praxis, for nearly half a century. They have stymied research in CS because they suggest, respectively, that all languages and all computation are somehow the same. When that outlook gives way to the Whorf-Wittgenstein view – linguistic relativity and thence linguistic determinism – we may hope that the human-machine cycle can go from being vicious to virtuous.

If we wish for the better option, we need to allow the traditional center of CS – machines and algorithms – to yield centre-stage to languages and languaging.


Confessions of a terrible programmer points out that the right combination of discipline, habits, tool-use and programming language (note the soft-to-hard spectrum) can turn a so-called terrible programmer into a good one – an exception to this post's blanket claim that programmers use computers terribly.  Reading it prompted me to write this post.

May the exceptions increase!
