Do Functional Languages have a Real Future in Scientific Programming?

Dorothy Bollman


It could be said that the year 2000 marks the fortieth birthday of the functional programming paradigm. It was in 1960, only a few years after the advent of FORTRAN, that John McCarthy [J. McCarthy, Recursive functions of symbolic expressions and their computation by machine, Comm. ACM 3(4), 1960] published the definition of LISP, the first functional language to be implemented on a computer. What is the state of health of the functional programming paradigm forty years later? How have functional languages fared as languages for applications, especially in parallel computing?

To begin, what exactly is a functional programming language? Simply speaking, we could define a functional programming language as one in which computational abstractions are expressed by means of functions. Programs written in this style are thus closer to ordinary mathematical notation than those written in an imperative language such as FORTRAN or C, or those written in an object oriented language such as C++ or Java. Other features common to functional languages are higher order functions, in which functions can be both function arguments and function values, and non-strict semantics, i.e., lazy evaluation, in which function arguments are not evaluated until needed. Another characteristic of functional languages is the absence of side effects, i.e., the evaluation of expressions does not involve storing new values in memory cells. Thus, functional languages generally do not contain assignments, or else they obey the single assignment rule, which stipulates that a name can receive a value only once in each name scope.
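These features can be sketched in Python, which, although not a pure functional language, supports higher order functions, lazy evaluation via generators, and a side-effect-free style; the function names below are ours, chosen purely for illustration.

```python
from functools import reduce

# Higher-order function: compose takes functions as arguments
# and returns a new function as its value.
def compose(f, g):
    return lambda x: f(g(x))

square = lambda x: x * x
increment = lambda x: x + 1
square_then_increment = compose(increment, square)

# Lazy (non-strict) evaluation: a generator computes values only
# when they are demanded, so this "infinite" sequence is safe.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

def take(k, gen):
    return [next(gen) for _ in range(k)]

# Absence of side effects: reduce folds a pure function over a
# sequence instead of repeatedly updating a variable in place.
total = reduce(lambda acc, x: acc + x, take(5, naturals()), 0)

print(square_then_increment(4))  # 17
print(total)                     # 0 + 1 + 2 + 3 + 4 = 10
```

A purely functional language goes further: every definition obeys the single assignment rule, so the imperative `n += 1` inside the generator would itself be expressed recursively.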

Since their inception, functional languages have been advocated to solve a variety of ills that have beset more traditional languages. These include the inadequacy of traditional languages in helping solve the software crisis, the difficulties of defining the semantics of these languages, and their inadequacies for parallel computation. How have functional languages fared in solving these problems?

The software crisis, which continues even today, was first identified in the 1960s, when it became clear that the development of software was not keeping pace with the development of hardware. Software developers have not been productive enough to keep up with the demands presented by new architectures, and the cost of software development has come to exceed that of hardware development.

It has been argued that one of the factors contributing to the software crisis is that traditional languages are not sufficiently expressive. An algorithm translated into a functional language is typically much shorter than its translation into a more primitive language. It has been observed that programmers produce roughly a fixed number of lines of code per year, regardless of the programming language they use. Thus, the productivity of a programmer who uses a very high level functional language can far exceed that of a programmer who uses a more primitive one.
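The difference in expressiveness can be seen even in a small example. The following sketch, in Python for concreteness, computes the sum of the squares of the even elements of a sequence twice: once with an explicit loop and a mutable accumulator, and once as a single functional expression.

```python
# Imperative style: explicit loop, mutable accumulator.
def sum_even_squares_imperative(xs):
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total

# Functional style: the same computation as one expression,
# with no assignments and no loop bookkeeping.
def sum_even_squares_functional(xs):
    return sum(x * x for x in xs if x % 2 == 0)

print(sum_even_squares_imperative(range(10)))  # 120
print(sum_even_squares_functional(range(10)))  # 120
```

The functional version states *what* is computed rather than *how*, and the gap in length and clarity widens as algorithms grow.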

Another obstacle to software development has been the difficulty of proving properties of programs, especially program correctness. A very important aspect of functional languages is that they enjoy mathematical properties that allow them to serve as both the specification language and the programming language, or at least allow for a specification language that is very close to the programming language, thus simplifying the task of verifying properties of programs. Denotational semantics, in particular, has been an especially successful approach to defining the semantics of functional languages.

One of the most attractive features of functional languages is their suitability for implementation on parallel architectures. Conventional languages are generally oriented toward a specific sequential machine model. In a functional language, there are no operational constraints such as a fixed order of evaluation of function arguments. There is a large body of theory and experience in implementing functional compilers and interpreters on a wide variety of parallel architectures, and there is compelling evidence that at least some of these implementations (e.g., Sisal [D. Cann, Retire FORTRAN? A Debate Rekindled, Comm. ACM 35(8), 81–89, 1992]) perform as well as or better than their conventional language counterparts.
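The essential point is that applications of a pure function are independent of one another, so they may be evaluated in any order, including in parallel, without changing the result. A minimal sketch in Python (standing in for a functional language, since Sisal implementations are not widely available) illustrates this with a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor

# A pure function: its result depends only on its argument, so
# separate applications may be evaluated in any order.
def f(x):
    return x * x + 1

xs = list(range(8))

# A sequential map and a parallel map must agree, because no
# application can observe or modify shared state.
sequential = list(map(f, xs))
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, xs))

assert sequential == parallel
print(parallel)  # [1, 2, 5, 10, 17, 26, 37, 50]
```

In an imperative language, by contrast, a compiler must prove the absence of side effects before reordering such a loop; in a pure functional language that guarantee holds by construction.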

Interest in functional languages increased considerably after John Backus received the 1977 ACM Turing Award. In his acceptance lecture, Backus [J. W. Backus, Can programming be liberated from the von Neumann style? A functional style and its algebra of programs, Comm. ACM 21(8), 613–641, 1978] criticized imperative languages such as FORTRAN, in whose development he himself had played a leading role, and advocated the functional paradigm for both its expressive power and its semantic advantages.

Given the advantages of functional programming languages, why aren't they used more extensively? The most widely used functional language is undoubtedly LISP, at least in the United States. However, its use is generally restricted to applications in artificial intelligence, and these applications mostly involve dialects that have added so many imperative features that the language can no longer be considered purely functional. The LISP dialect Scheme, which has retained most of the features of pure functional languages, is probably the most common functional language used for instruction in the United States. Even so, only a relatively small number of U.S. universities use Scheme in computer science courses (Reid Report []).

The reason functional languages are not more widely used is, in our opinion, their apparent lack of appeal to scientific programmers, and thus the lack of support within industry and government for their use and development. Functional languages represent a programming style substantially different from the imperative style that has always been the standard for scientific programming, and the ordinary scientific programmer does not always appreciate their mathematically sophisticated semantics.

A good example is the history of Sisal, a functional language that was designed for large-scale scientific application programming and has been implemented on a variety of parallel architectures; it performs especially well on vector machines. After supporting the development of Sisal for more than ten years, Lawrence Livermore National Laboratory withdrew its support in 1995, in spite of the language's proven performance. The reason was apparently the lack of interest and widespread acceptance within the scientific programming community, including even that at Lawrence Livermore.

Very few functional languages provide explicit parallel constructs; rather, it is the compiler that extracts the parallelism. The programmer can thus parallelize a functional program with little or no extra effort.

Functional programming languages could thus play a crucial role in reducing development costs and improving portability, while at the same time yielding good performance. J. Feo et al. [J. Feo, P. Miller, S. Skedzielewski, S. Denton, and C. Solomon, Sisal 90, in Proceedings of High Performance Functional Computing, Lawrence Livermore National Laboratory, Livermore, CA, 1995] suggest that much of the code in a large scientific application is neither parallel nor functional, and that this part of the program could be written in an imperative language such as FORTRAN, while the parallel core could be written in a functional language. It would thus appear that the future of functional languages for scientific programming depends on the future development of, and support for, mixed-language programming, or perhaps on the development of libraries of functional constructs that would play a role similar to that of MPI (the message passing interface).

Dorothy Bollman,
University of Puerto Rico



