Saturday, 26 March 2011

Functional programming

In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state.[1] Functional programming has its roots in lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as elaborations on the lambda calculus.[1]

In practice, the difference between a mathematical function and the notion of a "function" used in imperative programming is that imperative functions can have side effects, changing the value of program state. Because of this they lack referential transparency, i.e. the same language expression can result in different values at different times depending on the state of the executing program. Conversely, in functional code, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) both times. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.[1]
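
For example, a minimal Haskell sketch of a pure function (the name f is illustrative):

-- f is pure: its result depends only on its argument.
f :: Int -> Int
f x = x * x + 1

main :: IO ()
main = do
  print (f 3)  -- 10
  print (f 3)  -- 10 again: same argument, same result, no side effects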

Functional programming languages, especially purely functional ones, have historically been emphasized more in academia than in commercial software development. However, prominent functional programming languages such as Scheme,[2][3][4][5] Erlang,[6][7][8] Objective Caml,[9][10] and Haskell[11][12] have been used in industrial and commercial applications by a wide variety of organizations. Functional programming also finds use in industry through domain-specific programming languages like R (statistics),[13][14] Mathematica (symbolic math),[15] J and K (financial analysis),[citation needed] F# in Microsoft .NET, and XQuery/XSLT (XML).[16][17] Widespread domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, especially in eschewing mutable values.[18] Spreadsheets can also be viewed as functional programming languages.[19]

Programming in a functional style can also be accomplished in languages that aren't specifically designed for functional programming. For example, the imperative Perl programming language has been the subject of a book describing how to apply functional programming concepts.[20] JavaScript, one of the most widely employed languages today, incorporates functional programming capabilities.

History

Lambda calculus provides a theoretical framework for describing functions and their evaluation. Although it is a mathematical abstraction rather than a programming language, it forms the basis of almost all functional programming languages today. An equivalent theoretical formulation, combinatory logic, is commonly perceived as more abstract than lambda calculus and preceded it in invention. It is used in some esoteric languages including Unlambda. Combinatory logic and lambda calculus were both originally developed to achieve a clearer approach to the foundations of mathematics.[22]

An early functional-flavored language was LISP, developed by John McCarthy while at MIT for the IBM 700/7000 series scientific computers in the late 1950s.[23] LISP introduced many features now found in functional languages, though LISP is technically a multi-paradigm language. Scheme and Dylan were later attempts to simplify and improve LISP.

Information Processing Language (IPL) is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of "generator", which amounts to a function accepting a function as an argument, and, since it is an assembly-level language, code can be used as data, so IPL can be regarded as having higher-order functions. However, it relies heavily on mutating list structure and similar imperative features.

Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid 1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries.

John Backus presented FP in his 1977 Turing Award lecture Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs. He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style which has come to be associated with functional programming.

In the 1970s ML was created by Robin Milner at the University of Edinburgh, and David Turner initially developed the language SASL at the University of St. Andrews and later the language Miranda at the University of Kent. ML eventually developed into several dialects, the most common of which are now Objective Caml and Standard ML. Also in the 1970s, the development of Scheme (a partly-functional dialect of Lisp), as described in the influential Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs, brought awareness of the power of functional programming to the wider programming-languages community.

In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called Constructive type theory), which associated functional programs with constructive proofs of arbitrarily complex mathematical propositions expressed as dependent types. This led to powerful new approaches to interactive theorem proving and has influenced the development of many subsequent functional programming languages.

The Haskell language began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.

Concepts

A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages are often hybrids of several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[24]
First-class and higher-order functions
Main articles: first-class function and higher-order function

Higher-order functions are functions that can either take other functions as arguments or return them as results (the differential operator d / dx that produces the derivative of a function f is an example of this in calculus).

Higher-order functions are closely related to first-class functions, in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term that describes programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values).

Higher-order functions enable partial application or currying, a technique in which a function is applied to its arguments one at a time, with each application returning a new function that accepts the next argument. This allows one to succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
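
A minimal Haskell sketch of partial application (the names successor and addTen are illustrative):

-- the successor function, expressed as (+) partially applied to 1
successor :: Integer -> Integer
successor = (+) 1

add :: Integer -> Integer -> Integer
add x y = x + y

-- partial application: supplying only the first argument yields a new function
addTen :: Integer -> Integer
addTen = add 10

main :: IO ()
main = print (successor 41, addTen 5)  -- (42,15)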
Pure functions

Purely functional functions (or expressions) have no memory or I/O side effects. This means that pure functions have several useful properties, many of which can be used to optimize the code:

    * If the result of a pure expression is not used, it can be removed without affecting other expressions.
    * If a pure function is called with parameters that cause no side-effects, the result is constant with respect to that parameter list (sometimes called referential transparency), i.e. if the pure function is again called with the same parameters, the same result will be returned (this can enable caching optimizations such as memoization).
    * If there is no data dependency between two pure expressions, then their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe).
    * If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation).
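
A minimal Haskell sketch of these properties (hypot is an illustrative pure function):

-- hypot is pure: its result depends only on its arguments.
hypot :: Double -> Double -> Double
hypot a b = sqrt (a * a + b * b)

main :: IO ()
main = do
  -- because hypot is pure, hypot 3 4 + hypot 3 4 can be shared
  -- as a single evaluation (common-subexpression elimination)
  let d = hypot 3 4
  print (d + d)  -- 10.0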

While most compilers for imperative programming languages detect pure functions, and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 allows functions to be designated "pure".
Recursion
Main article: Recursion (computer science)

Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, allowing an operation to be performed over and over. Recursion may require maintaining a stack, but tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. The Scheme language standard requires implementations to recognize and optimize tail recursion. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches.
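
A small Haskell sketch of tail recursion (an accumulator-passing factorial; names are illustrative):

-- the recursive call to go is the last thing the function does,
-- so a compiler can turn it into a loop rather than growing the call stack
factorial :: Integer -> Integer
factorial n = go 1 n
  where
    go acc 0 = acc
    go acc k = go (acc * k) (k - 1)  -- tail call

main :: IO ()
main = print (factorial 20)  -- 2432902008176640000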

Common patterns of recursion can be factored out using higher order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such higher order functions play a role analogous to built-in control structures such as loops in imperative languages.
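
A minimal Haskell sketch of two such patterns expressed as folds rather than explicit recursion (names are illustrative):

-- foldr captures the pattern "combine each element with a running result"
total :: [Integer] -> Integer
total = foldr (+) 0

-- the same fold pattern gives list length
count :: [a] -> Integer
count = foldr (\_ n -> n + 1) 0

main :: IO ()
main = print (total [1 .. 10], count "fold")  -- (55,4)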

Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. See Turner 2004 for more discussion.[25]
Strict versus non-strict evaluation
Main article: Evaluation strategy

Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm will itself fail. For example, the expression

 print length([2+1, 3*2, 1/0, 5-4])

will fail under strict evaluation because of the division by zero in the third element of the list. Under nonstrict evaluation, the length function will return the value 4, since evaluating it will not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Non-strict evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
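
In Haskell, which is non-strict by default, a runnable rendering of the example above (with error standing in for the failing division) prints 4 rather than failing:

-- length never needs the elements' values, so the failing element is never evaluated
main :: IO ()
main = print (length [2 + 1, 3 * 2, error "division by zero", 5 - 4])  -- 4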

The usual implementation strategy for non-strict evaluation in functional languages is graph reduction.[26] Non-strict evaluation is used by default in several pure functional languages, including Miranda, Clean and Haskell.

Hughes 1984 argues for non-strict evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.[27] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.[28] Harper 2009 proposes including both strict and nonstrict evaluation in the same language, using the language's type system to distinguish them.[29]
Type systems, polymorphism, algebraic datatypes and pattern matching

Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, as opposed to the untyped lambda calculus used in Lisp and its variants (such as Scheme). The use of algebraic datatypes and pattern matching makes manipulation of complex data structures more convenient and expressive; the presence of strong compile-time type checking makes programs more reliable, while type inference frees the programmer from the need to manually declare types to the compiler.
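
A minimal Haskell sketch of an algebraic datatype taken apart by pattern matching (the Shape type is illustrative); the signature of area could be omitted and inferred:

-- an algebraic datatype with two constructors
data Shape = Circle Double
           | Rectangle Double Double

-- pattern matching selects a case per constructor
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h

main :: IO ()
main = print (map area [Circle 1.0, Rectangle 2.0 3.0])  -- [3.141592653589793,6.0]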

Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which allows types to depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in predicate logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the C programming language that is written in Coq and formally verified.[30]

A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience.[31] GADTs are available in the Glasgow Haskell Compiler and in Scala (as "case classes"), and have been proposed as additions to other languages including Java and C#.[32]
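
A minimal sketch of a GADT in GHC (requires the GADTs extension; the Expr type is illustrative): the result type of each constructor constrains how expressions may be combined, so an ill-typed term such as Plus (BoolLit True) (IntLit 1) is rejected at compile time.

{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Plus    :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- pattern matching refines the type variable a in each branch
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Plus a b)  = eval a + eval b
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (IntLit 1) (Plus (IntLit 2) (IntLit 3))))  -- 1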
Functional programming in non-functional languages

It is possible to employ a functional style of programming in languages that are not traditionally considered functional languages.[33] Amongst imperative programming languages, the D programming language's solid support for functional programming stands out. For example, D has a pure modifier to enforce functional purity. Only Fortran 95 has something similar.[34]

First class functions have slowly been added to mainstream languages. For example, in early 1994, support for lambda, filter, map, and reduce was added to Python. Then during the development of Python 3000, Guido van Rossum called for the removal of these features. So far only the reduce function has been removed.[35] First class functions were also introduced in PHP 5.3, Visual Basic 9, C# 3.0, and C++0x.

The Language Integrated Query (LINQ) feature, with its many incarnations, is an obvious and powerful use of functional programming in .NET.

In Java, anonymous classes can sometimes be used to simulate closures;[citation needed] however, anonymous classes are not always proper replacements to closures because they have more limited capabilities.

Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold.
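
A small Haskell sketch of the strategy pattern reduced to a higher-order function (the pricing strategies are illustrative):

-- a "strategy" is simply a function passed as an argument
type Strategy = Double -> Double

standard, discounted :: Strategy
standard   price = price
discounted price = price * 0.9

checkout :: Strategy -> [Double] -> Double
checkout strategy items = sum (map strategy items)

main :: IO ()
main = print (checkout discounted [10, 20, 30])  -- 54.0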

The benefits of immutable data can be seen even in imperative programs, so programmers often strive to make some data immutable even when working in imperative languages.

Comparison of functional and imperative programming

Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming disallows side effects completely. Disallowing side effects provides for referential transparency, which makes it easier to verify, optimize, and parallelize programs, and easier to write automated tools to perform those tasks.

Higher order functions are rarely used in older imperative programming. Where a traditional imperative program might use a loop to traverse a list, a functional style would often use a higher-order function, map, that takes as arguments a function and a list, applies the function to each element of the list, and returns a list of the results.
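
For example, in Haskell:

-- map applies a function to every element and returns the list of results,
-- replacing an explicit loop over the list
main :: IO ()
main = print (map (\x -> x * x) [1, 2, 3, 4, 5])  -- [1,4,9,16,25]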
Simulating state

There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way.

The pure functional programming language Haskell implements them using monads, derived from category theory. Monads are powerful and offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).[37]
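
A minimal Haskell sketch using the State monad from the mtl library to model the bank-account example (the function names are illustrative): the balance is threaded through the computation rather than updated in place.

import Control.Monad.State

type Balance = Integer

-- each action describes an update to the threaded balance
deposit, withdraw :: Integer -> State Balance ()
deposit  amount = modify (+ amount)
withdraw amount = modify (subtract amount)

session :: State Balance Balance
session = do
  deposit 100
  withdraw 30
  get            -- read the final balance

main :: IO ()
main = print (evalState session 0)  -- 70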

Alternative methods such as Hoare logic and uniqueness types have been developed to track side effects in programs. Some modern research languages use effect systems to make explicit the presence of side effects.
Efficiency issues

Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal.[38] For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree).[citation needed] However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as Objective Caml and Clean are only slightly slower than C.[39] For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimization in mind.

Further, immutability of data can, in many cases, lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.[citation needed]

Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks when used improperly)[citation needed].
Coding styles

Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.

// Fibonacci numbers, imperative style
#include <iostream>

int fibonacci(int iterations)
{
 int first = 0, second = 1; // seed values

 for (int i = 0; i < iterations; ++i)
 {
  int sum = first + second;
  first = second;
  second = sum;
 }

 return first;
}

int main()
{
 std::cout << fibonacci(10) << "\n"; // prints 55
 return 0;
}

A functional version (in Haskell) has a different feel to it:

-- Fibonacci numbers, functional style

-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)

-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1

-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)

The imperative style describes the intermediate steps involved in calculating fibonacci(N), and places those steps inside a loop statement. In contrast, the functional implementation shown here states the mathematical recurrence relation that defines the entire Fibonacci sequence, then selects an element from the sequence (see also recursion). This example relies on Haskell's lazy evaluation to create an "infinite" list of which only as much as is needed (the first 11 elements in this case, up to index 10) will actually be computed. That computation happens when the runtime system carries out the action described by "main".

Use in industry

Functional programming has a reputation for being of purely academic interest[citation needed]. However, several prominent functional programming languages have been used in commercial or industrial applications. For example, the Erlang programming language, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems.[7] It has since become popular for building a range of applications at companies such as T-Mobile, Nortel, and Facebook.[6][8][40] The Scheme dialect of Lisp was used as the basis for several applications on early Apple Macintosh computers,[2][3] and has more recently been applied to problems such as training simulation software[4] and telescope control.[5] Objective Caml, which was introduced in the mid 1990s, has seen commercial use in areas such as financial analysis[9], driver verification, industrial robot programming, and static analysis of embedded software.[10] Haskell, although initially intended as a research language,[12] has also been applied by a range of companies, in areas such as aerospace systems, hardware design, and web programming.[11][12]

Other functional programming languages that have seen use in industry include Scala,[41] F#,[42][43] Lisp,[44] Standard ML,[45][46] and Clojure.