History of Computing.
UNCOL (Universal Computer Orientated Language)

History, Heresy and Heretics.
Nolan Harley
Year 3, University of Bath.

Math0030 - History, Heresy & Heretics

N. B. Davies
N. S. Harley

Hypothesised several years ago, UNCOL (Universal Computer-Orientated Language) has as its basic premise that programs expressed in a problem-orientated language (such as ALGOL) can be translated first into UNCOL and thence into machine code. This translation would involve keeping invariant certain aspects of a program while it adapts those aspects which depend on the machine chosen to execute the program.
Quoted from Philip R. Bagley in 'The Computer Journal' Vol. 4 1961


Programming Portability
The ideal in computing is to be able to run a program on various different computers without having to rewrite it. To approach this goal we must investigate the problems of translating a computer program from its source language, in which the program is initially written, into one or more target languages. The target language is generally the machine code of a specific computer and hence varies between computers. The source language of a program can be at any of several levels. It may be any of the following:

  • A machine-level language for the specific machine it is being written on, such as the SAP symbolic language for the IBM 704.
  • Any one of a number of current problem-orientated languages (POL's), such as FORTRAN, C, etc.
  • Any POL used in the future, as different languages tend to develop as time progresses and computer resources improve.
  • A language which is largely independent of a specific machine and which is not especially problem orientated, such as ALGOL.

    Why Programs are not Written in Machine Languages
    The earliest machines were very small and simple. For example, as we have seen before, the Manchester Mark 1, produced in 1948, had only 7 opcodes and 32 words of main memory. For such a computer, entering programs as sequences of binary digits at a keyboard was not unduly complex. When writing these programs down, however, it was more convenient to use a shorthand notation for the opcodes rather than the binary sequences of digits.
    As more complex machines and longer programs developed, hand translation of these notations into binary, for entry into the computer, became tedious. In the late 1940's it was pointed out that this translation could be performed perfectly well by the computer itself. Programs to do this became known as assemblers, and the notation codes as assembly languages.
    Because of their simplicity, being essentially a one-to-one mapping from notation code to machine opcode, assembly language programs tended, and still tend, to be excessively long. More complex languages were developed to describe programs more concisely: each of their instructions could represent several machine code operations, as opposed to this one-to-one mapping. Programs that translate these 'high-level languages' into machine code were more complex than assemblers and became known as compilers. These higher-level languages were easier to write programs in than machine languages and hence became the common languages for writing programs, which would then be translated into the machine language by a compiler.
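    The one-to-one mapping an assembler performs can be sketched in a few lines. The mnemonics and opcode encodings below are invented for illustration; they are not the real instruction set of the Mark 1 or any other machine.

```python
# Minimal sketch of an assembler: each mnemonic maps to exactly one
# opcode, and each source line becomes exactly one binary machine word.
# (Opcode table and word format are hypothetical, for illustration only.)
OPCODES = {"LOAD": "000", "STORE": "001", "ADD": "010", "JUMP": "011"}

def assemble(lines):
    """Translate each 'MNEMONIC operand' line into one 8-bit binary word."""
    words = []
    for line in lines:
        mnemonic, operand = line.split()
        # 3-bit opcode followed by a 5-bit operand address
        words.append(OPCODES[mnemonic] + format(int(operand), "05b"))
    return words

print(assemble(["LOAD 3", "ADD 4", "STORE 5"]))
# → ['00000011', '01000100', '00100101']
```

    The one-to-one character of the translation is what made early assemblers simple to write, and equally what made assembly programs as long as the machine code they stood for.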
    Also, machine code is a very low-level language and is complex to write in. This became a greater problem as each advance in machine design (especially during the 1950's and 1960's) was accompanied by increased complexity in the structure of its language, making programming in machine code progressively more costly in time as well as in money. This, too, encouraged programs to be written in problem-orientated languages.

    Translating this Source Language into a Machine Language

    A compiler, which is a form of translator, will take this 'easy-to-code' language (source language) and translate it into a specific machine code. A compiler will usually supply error and diagnostic information about the source program being compiled.
    During the 1950's and 1960's, a rather difficult process was required to transform each POL into each ML desired, and additional routines had to be written whenever it was necessary to produce a different ML from the one belonging to the machine on which the translation process was executed. The number of individual compilers therefore increased as POL's multiplied, as machines were replaced, and as single organisations came to own several types of machine.
    It is easy to say here that a universal POL would minimise this problem, but that solution is not feasible. Because there are so many varieties of problems, any attempt at a universal problem-orientated language will result either in inadequacy (such as an attempt to use an algebraic language for a logical problem) or in such extensiveness as to become useless. In the latter case the 'universal' POL is really the sum of all POL's and is never truly universal for long, since the language must grow to cope with the new classes of problems that arise.
    In order to minimise the number of compilers required, it is best to have some form of language between the POL's and the machine languages: a universal intermediate language. All POL's would be translated into it, and it would then be translated into all machine languages. This was proposed in the past as the UNCOL project. Its main idea was that such a common intermediate language would reduce the problem of compiling N languages on M machines from an N*M problem to an N+M problem.
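    The arithmetic behind the N*M versus N+M claim is easy to check: without an intermediate language, every (language, machine) pair needs its own compiler; with one, each language needs only a front-end into the intermediate language and each machine only a back-end out of it. A small sketch (the function name is mine, for illustration):

```python
def compilers_needed(n_languages, n_machines):
    """Compare the two compiler-construction strategies."""
    direct = n_languages * n_machines      # one full compiler per (POL, machine) pair
    via_uncol = n_languages + n_machines   # N front-ends plus M back-ends
    return direct, via_uncol

# e.g. 5 languages on 4 machines:
print(compilers_needed(5, 4))   # → (20, 9)
print(compilers_needed(10, 10)) # → (100, 20)
```

    The gap widens quickly: as both N and M grow, the direct approach grows quadratically while the intermediate-language approach grows only linearly, which is what made the idea so attractive.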

    What was UNCOL?

    The Idea of UNCOL.

    The use of an intermediate language form, called UNCOL, is the basis of one proposed method of making programs convertible.
    Quoted from Philip R. Bagley in 'The Computer Journal', Vol. 4, 1961.

    The basic idea of UNCOL is that there is some form, intermediate between any problem-orientated language and any machine language, in which any program can be expressed. Each problem-orientated language can be converted into this one universal language, which can then be translated into any machine language, so that the program runs on any machine without any further programming.
    If all problem-orientated languages were compiled into the same intermediate language then each different machine could interpret this language and run the program, hence the program would be 'portable' between different machines.
    This is not as easy as it first sounds. The main problem is that the universal language must also deal with the 'machine-dependent' and 'machine-independent' parts of a program. Each POL that existed during the 1960's was, to some extent, machine-dependent, as it dealt with information about storage allocation, data organisation and so on. Such details can be difficult to express in a way that other machines can interpret correctly into their own machine language. A solution which would have aided this, and was considered during the UNCOL project, would have been some 'human aid' at this stage. The machine-independent parts of the program can be left unmodified, as they are expressed within the program itself and have no relevance to the machine it is run on.
    One of the first attempts at introducing an UNCOL was made in 1958 by a committee of SHARE (the IBM users' association). It was known as the Three Level Concept (Fig. 1 below). It shows how each POL used with a given machine would use a generator to perform the transformation from the POL into the one Universal Computer Orientated Language (UNCOL). This UNCOL could then be transformed into a specific machine language with the use of a translator. For each machine only one translator would be required, regardless of the number of POL's used, as the translator only transforms this one universal language into that specific machine language.
    Figure 1: Diagram from 'The Problem of Programming Communication with Changing Machines', in 'The Association for Computing Machinery Journal', August 1958.


    Why UNCOL was unsuccessful.

    It is difficult to explain why the various UNCOL attempts were unsuccessful, as the failure was caused by a large collection of small problems.

    UNCOL was a very ambitious project for its day, and would have required innovations in many areas. At the time it was proposed, machines were expensive and slow, whilst programmers were relatively cheap; today the exact opposite is true, with machines cheap and fast and programmers expensive. Computers were not developed enough to compile a program and also adapt its machine-dependent parts: doing so put too much strain on the amount of memory available. During the 1950's and 1960's, and to some extent even up to the early 1980's, memory was a big problem in computer programming.
    Memory was very expensive and computers were not powerful. As we have seen, the breakthrough did not really come until the first disk drive was introduced in 1956, but this was still a very expensive piece of machinery, and it did little to change the way in which programs were written to use as little memory as possible until fairly recently.
    An UNCOL required more memory than was really available, since it needed to store information that a simple compiler would not, in order not to make the language 'machine-dependent'. At that point in time the progress of an UNCOL was hindered by the cost of memory. Nowadays there is little restriction on what memory is needed for a particular program, and the dominant cost has shifted from memory to the programmer. There is now more emphasis on reducing the amount of programming that is needed, as the cost of the programmer has greatly increased and become the major factor. Therefore, since a universal compiler language would allow programs to run on different machines without any further adaptation of the program itself, the idea of an UNCOL has not diminished.

    UNCOL also had to combat the problem of some machines not being adequate for a given program, such as a machine:

  • Lacking sufficient internal storage of the rapid random-access type, so that instructions or data had to be brought in as segments from external, or even secondary, storage.
  • Having a secondary storage which is randomly accessible at an average interval significantly greater than that envisioned by the programmer. For example, a program that refers to drum locations at random would run very inefficiently if tapes were used instead of drums (this is linked to the previous point).
  • Lacking a special terminal device, such as (during the 1950's and 60's) a light gun, with no suitable substitute for it.
  • Lacking, in its set of input-output characters, some of the characters needed by a program.

    This illustrates that some programs cannot feasibly be executed on some machines, and an UNCOL that attempted to cope with these machine-dependent aspects would become too general and grossly inefficient.

    The UNCOL idea of defining a single intermediate language adequate for all high-level languages and for all machine architectures proved to be over-ambitious. This has not stopped the search for an UNCOL as time has progressed and machines have improved, because such a language would aid many improvements in computing and prove a worthwhile investment. Some of the benefits of an UNCOL are outlined later on.

    The problem with finding an UNCOL is that it is extremely hard, on a par with, say, automated translation of poetry from English into Chinese.

    How the Idea of an UNCOL has Evolved.

    Later work on compilers and intermediate languages in fact fulfilled many of the promises of the UNCOL work. Those innovations, together with the widespread use of portable software and open systems, made a universal distribution format economically worthwhile. Hence the search for an UNCOL has been ongoing.

    As UNCOL was thought to be over-ambitious, other attempts were made at separate areas of the problem. One was Janus, which dealt only with the problem of discovering a universal intermediate language suitable for implementation on any target machine architecture.
    The idea behind the development of Janus is that the front-end of a compiler generating Janus should perform as much machine-independent language processing as possible, without removing any information about the source program that could be of use to any back-end (the front-end of a compiler concentrating on generating an intermediate language, and the back-end concentrating on translating this intermediate language into the target language). The general idea is that Janus should specify what must be done without laying down how it should be done. This means that, although the Janus abstract machine is at a fairly low level, some features of the language are at a high level, generally in the areas where it must accommodate the wide variation between machines.
    The Janus abstract machine has been modified over time in the light of experience, to provide a model closer to the ideal universal intermediate language that fully specifies what is to be done without restricting how it is to be accomplished. For example, the original machine included an accumulator, an index register and a stack; subsequently the accumulator was incorporated into the stack, though operations on the stack can still specify at most one operand when required.
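    The flavour of such a stack-based abstract machine can be shown with a minimal interpreter. This is a sketch in the spirit of Janus, not Janus itself: the instruction names and the class are invented for illustration, but it shows the key property that each instruction carries at most one operand and all arithmetic happens on the stack.

```python
# Minimal sketch of a stack-based abstract machine, in the spirit of
# intermediate languages like Janus. Instruction set is hypothetical.
class StackMachine:
    def __init__(self):
        self.stack = []

    def execute(self, program):
        """Run a list of (opcode, [operand]) tuples; return top of stack."""
        for op, *arg in program:
            if op == "push":                     # the only op with an operand
                self.stack.append(arg[0])
            elif op == "add":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a + b)
            elif op == "mul":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a * b)
        return self.stack[-1]

# (2 + 3) * 4, expressed as machine-independent stack code
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(StackMachine().execute(program))  # → 20
```

    The stack code says *what* to compute but not *how*: a back-end for a register machine is free to map the stack onto registers, which is exactly the "specify what, not how" principle the text describes.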
    Janus has been used as an intermediate language in portable compilers for Pascal and BCPL. The language has also been used in the design studies for the ALGOL 68 compiler and directly for the coding of portable mathematical software.
    Janus is a good example of how segments of the UNCOL project have been attempted separately and then modified to try to deal with the whole problem.


    UNCOL was the first step in three decades of work in software portability and compiler design, whose culmination is ANDF.

    ANDF is an Architecture and language Neutral Distribution Format. It compiles high-level source code to machine-independent byte code. This byte code can be executed on any machine on which an ANDF installer is located, without further portability concerns; the ANDF installer converts the byte code to the computer's machine code for execution.
    This technology allows application developers to transport their applications to any machine, regardless of operating system, hardware architecture, or language development environment. Portability issues are instead handled by the ANDF installer for the specific machine.
    However, ANDF is not a portability tool that will mechanically turn a non-portable program into a portable one; it is a distribution format for portable software, and must therefore provide the mechanisms needed by portable software. Portable software is defined as software that can easily be moved from one execution environment to another.
    It is believed that if ANDF becomes popular it will not remain machine-independent as there could be serious commercial advantage in building hardware which is ANDF compatible.

    These are only some of the attempts at creating an UNCOL. Many have been unsuccessful and are rarely talked about, making information on them very difficult to obtain. There have also been many attempts at creating parts of an UNCOL, with the hope of developing these into a true UNCOL when machine development and human knowledge allow.


    The idea of an UNCOL seemed to be good at the time as it brought many benefits with it. Some of these are:

    The UNCOL system would allow programs to be run on more than one type of machine without any changes to the program, so no additional programming cost or time is involved. The software would thus be portable, meaning that a program can, with little effort, be made to run on computers other than the one for which it was originally written.
    A new computer would then only require another translator, to translate UNCOL into the new machine language, so that all previously written programs could be run on it easily; this should also allow the outstanding features of each machine to be exploited to the fullest.
    No POL that already exists, or new POL invented, ever has to become obsolete: once a system programmer writes a generator to translate the POL into UNCOL, routines written in that POL can be executed on any machine at any time in the future.

    A personal view is that the idea of a universal intermediate language was a good one, even though it has not been very practical to date. With technological improvements, present and future, the idea may yet be implemented in its full form. Moreover, with the cost of programmers now outweighing the cost of machinery, the need for an UNCOL has increased.

    e-mail: nolan.harley@ntlworld.com