Description of the RECRE project

1. Introduction

The goal of this project is to get a better understanding of the proofs-as-programs correspondence for classical logic, using recent advances in the theory of classical realizability.

Currently, this correspondence is very well developed and understood in the case of pure functional programming, without side effects or control. It has had remarkable outcomes on both the theoretical and practical sides: rich models of intuitionistic logic and constructive mathematics have been studied, proof assistants and theorem provers have been developed and used to prove significant results, and mathematical models of functional computation have been used for program specification and certification, in particular using the technique of program extraction from mathematical proofs.

Following the discovery by Griffin in 1990 of a correspondence between classical logic and control operators in functional programming, Krivine developed classical realizability as a complete reformulation of the principles of realizability in the context of classical reasoning. At its core is the idea that realizers of a given formula are programs that respect a specification associated with the formula, and that specifications are formulated in terms of the interaction of a program with its environment. Quite surprisingly, this construction can be seen as a generalization of the model-theoretic notion of forcing introduced by Cohen in the 1960s to prove independence results in set theory, such as the independence of the continuum hypothesis. Thanks to the theory of classical realizability, such axioms were recently given a computational content in terms of very concrete programming features, including a global clock and memory management primitives. These striking connections naturally suggest new directions to extend the proofs-as-programs paradigm beyond the scope of functional programming.

The aim of our project is to study and develop this computational interpretation of logic and the technology of classical realizability itself, using the expertise of researchers from several neighboring areas. Indeed, the proofs-as-programs correspondence is intrinsically interdisciplinary since its domains range from mathematical logic to programming language design through category theory and theoretical computer science. We plan to investigate the connections between realizability, forcing and side effects as well as the possible applications of this technology to program extraction. We also plan to develop and systematize realizability techniques in concurrent programming and low level languages, while studying the algebraic structures underlying realizability using category theory and game semantics.

2. Context and objectives of the proposal

2.1. Context of the proposal

General goal

The goal of this project is to reinforce and extend the bridge between computer science and logic through the proofs-as-programs correspondence (a.k.a. the Curry-Howard correspondence) using the new ideas, analogies and tools provided by the most recent developments of the theory of realizability. On the side of computer science, this correspondence is essentially limited to pure functional programming, and we would like to extend it to a wider scope of programming features, such as imperative programming (side effects, references, input/output) and concurrent programming (parallelism and non determinism, message passing, scheduling).

The Curry-Howard correspondence

The Curry-Howard correspondence is a major theoretical tool that was developed in the last fifty years by mathematicians and computer scientists to bridge the gap between the concepts of logic (mainly proof theory) and computer science (mainly functional programming). The key idea of this correspondence is that each mathematical proof (formalized in a suitable deduction system) corresponds to a computer program (expressed in a suitable programming language). This formal correspondence establishes a dictionary between the concepts of both theories.

Theoretical output

The discovery of the correspondence between proofs and programs brought a deep renewal to proof theory, giving a central role to the notion of cut elimination (the proof-theoretic equivalent of computation) and connecting it to the theory of rewriting. It also had a major impact on the further development of denotational semantics, categorical semantics and game semantics (in connection with linear logic). Two major byproducts of this correspondence are type theory (and its variants, such as the calculus of constructions implemented in the Coq proof assistant) and linear logic, which is now a standard tool to analyze proofs in intuitionistic and classical logics.

Practical output

The proofs-as-programs correspondence led to the development of proof assistants such as Coq, MinLog, Agda, Plastic and NuPrl, which are based on type theory and its variants. It also deeply influenced the design of functional programming languages such as SML, Caml and Haskell. It is worth recalling in particular that the Coq proof assistant and the Caml programming language were initially developed in the same INRIA project, Formel, led by Gérard Huet. Coq and its accompanying tools are now used daily in academia and industry, mainly for the development of libraries of mathematical theories and as a tool for program certification. Another application of the Curry-Howard correspondence is program extraction, by which a formal existence proof of a function satisfying a given specification can be turned into a program implementing the corresponding function. Program extraction is now well understood in constructive mathematics, and it is implemented in the Coq proof assistant.


For a long time, the correspondence between proofs and programs was limited to intuitionistic logic and constructive mathematics on the logical side, and to functional and sequential programming on the programming side - thus leaving aside parallel, non deterministic and concurrent behaviors. Even today, it still hardly takes into account imperative features (side effects). A major breakthrough came in 1990 with the discovery by T. Griffin of a strong connection between classical (i.e. non constructive) reasoning principles (the law of excluded middle, reductio ad absurdum) and control operators in programming languages. This connection has been widely studied from the point of view of deduction and typing, especially in connection with polarized forms of linear logic, and many classical extensions of the lambda-calculus have been proposed since. Among these extensions, we find calculi of delimited continuations, which contain an operator for delimiting the scope of continuations. Delimiters significantly increase the expressiveness of computation: Filinski showed in 1994 that delimited continuations are expressive enough to simulate all `monadic' effects, and Felleisen and Sitaram (for call-by-value computation) and Saurin (for call-by-name computation) showed that delimiters yield observational completeness properties in control calculi.
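Griffin's observation is that a call/cc-style control operator can be given the type of Peirce's law ((A → B) → A) → A, a classical principle. As a minimal illustration (ours, not taken from the proposal), Python has no first-class continuations, but an *escaping* continuation can be simulated with a private exception:

```python
# Sketch of Griffin's observation: a call/cc-style operator inhabits
# Peirce's law ((A -> B) -> A) -> A.  We simulate an escaping continuation
# with an exception class local to each call, so nested uses cannot
# capture each other's escapes.

def call_cc(f):
    """Apply f to an escape continuation k; invoking k(v) aborts f and returns v."""
    class _Escape(Exception):
        def __init__(self, value):
            self.value = value

    def k(value):
        raise _Escape(value)

    try:
        return f(k)
    except _Escape as e:
        return e.value

# Classical flavour: either produce a witness by jumping out, or fall through.
def first_even(xs):
    def body(k):
        for x in xs:
            if x % 2 == 0:
                k(x)          # escape with the witness
        return -1             # no witness found
    return call_cc(body)

print(first_even([3, 5, 8, 9]))  # -> 8
print(first_even([3, 5, 7]))     # -> -1
```

The exception-based encoding only supports one-shot, upward escapes, which is precisely the fragment of control that exceptions share with call/cc.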


Realizability can be seen as a generalization of typing where the relationship between programs and types is not defined from a type system (corresponding to a deduction system in logic), but from the very computational properties of the program w.r.t. its type (seen as a specification). For this reason, realizability is the appropriate tool for investigating program extraction. And due to its enhanced features (compared with typing), it naturally constitutes a fruitful source of inspiration for the extension of existing type systems as well as for the design of new ones.
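The shift of perspective can be made concrete with a toy rendering (our illustration, not Krivine's formal definition): a specification is a predicate on input/output pairs, and a program counts as a realizer when its observable behavior satisfies the predicate, regardless of how the program was built:

```python
# Toy sketch: a realizer is judged by behaviour, not by construction.
# A "specification" is a predicate on (input, output) pairs; `realizes`
# checks a program against it empirically on some inputs.

def realizes(program, spec, test_inputs):
    """Check that `program` respects `spec` on every given input."""
    return all(spec(x, program(x)) for x in test_inputs)

# Specification for "for every n there is some m greater than n".
spec_succ = lambda n, m: m > n

# A well-typed, canonical realizer...
def tidy(n):
    return n + 1

# ...and a contrived program that no derivation would naturally produce,
# yet it satisfies the same specification.
def messy(n):
    return (n * 0) + n + 42

print(realizes(tidy, spec_succ, range(10)))   # -> True
print(realizes(messy, spec_succ, range(10)))  # -> True
```

A type system would distinguish these two programs by their construction; realizability identifies them as realizers of the same formula, which is exactly the extra slack that program extraction can exploit.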

Just as in the case of typing, realizability was for a very long time limited to intuitionistic logic, and it is only in the 90's that J.-L. Krivine (following Griffin's discovery) developed the theory of classical realizability, which is a complete reformulation of the very principles of realizability in the framework of classical logic. Quite strikingly, the theory of classical realizability appears to be strongly connected with P. Cohen's theory of forcing (that was introduced to prove the independence of the continuum hypothesis), one of the main sources of inspiration of Krivine's work. Recent works about the connection between classical realizability and forcing revealed intriguing analogies with imperative programming features and concurrent programming, which suggest new promising research directions for the extension of the Curry-Howard correspondence to a wider scope of programming features.

2.2. State of the art and positioning of the proposal

International and national context

The proofs-as-programs correspondence is a widely investigated research program whose core groups can be found in the European Union: France (Paris, Lyon, Marseille, Chambéry), Italy (Torino), the United Kingdom (London, Manchester, Edinburgh), Germany (München, Darmstadt), the Netherlands (Nijmegen), Denmark (Copenhagen), Sweden (Stockholm, Göteborg), ... It is also studied in the United States (Cornell), in Canada (Ottawa) and in Japan (Tokyo, Kyoto). By its very nature, the Curry-Howard correspondence unites the work of researchers with a strong background in computer science and in mathematics. In this context, France enjoys a unique position due to a longstanding tradition in proof theory (cut elimination, normalization, rewriting, linear logic, type systems and type theory) combined with an important effort towards applications (Caml, Coq and its accompanying tools, formalization of mathematics, program certification, etc.). The Curry-Howard correspondence is investigated or simply used as a tool by many research teams (at CNAM, Paris 6, Paris 7, Paris 12, Paris 13, ENS Lyon, Université de Savoie, Institut de mathématiques de Luminy, INRIA Paris-Rocquencourt, INRIA Saclay, INRIA Nice Sophia Antipolis) whose domains of expertise cover a wide range, from the most theoretical aspects to the most practical ones.

State of the art

The Curry-Howard correspondence is now well understood in the framework of intuitionistic logic and constructive mathematics, both from the point of view of typing/deduction and from the point of view of realizability. The semantics of the corresponding logics (including linear logic, a symmetrized version of intuitionistic logic) and programming languages have been intensively studied, using the tools of category theory, rewriting, denotational semantics and game semantics. When restricted to intuitionistic logic, these theoretical tools are now mature enough to find their applications in computer science. Several proof assistants have been developed upon these principles (Coq, MinLog, Agda), and most of them provide facilities to extract functional programs (expressed in SML, Caml, Haskell) from constructive proofs formalized in various type systems including type theory and its variants.

However, for a long time, the computational content of classical proofs was only studied indirectly, via clever translations into intuitionistic logic (using the so-called negative translations) or into linear logic (using modal translations, and later polarization). One of the main results achieved by these translations is the conservativity of classical logic w.r.t. intuitionistic logic for the class of Π⁰₂ formulas, thus allowing constructive extraction techniques to be used for classical proofs in this particular case. These indirect techniques have been successfully used to understand the computational content of many classical proofs, but as often as not, they seem to hide the actual structure of the original proof.
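A negative translation is a purely syntactic transformation on formulas. The following small executable sketch (ours, for illustration) implements a translation in the style of Gödel-Gentzen on formulas encoded as nested tuples, double-negating atoms and turning disjunctions and existentials into their De Morgan forms:

```python
# Goedel-Gentzen-style negative translation on a tuple encoding of formulas:
# ('atom', p), ('and', a, b), ('or', a, b), ('imp', a, b),
# ('forall', x, a), ('exists', x, a).  Negation is encoded as A -> False.

def neg(a):
    return ('imp', a, ('atom', 'False'))

def gg(f):
    """Translate a classical formula into its intuitionistic counterpart."""
    tag = f[0]
    if tag == 'atom':
        return neg(neg(f))                    # atoms get a double negation
    if tag == 'and':
        return ('and', gg(f[1]), gg(f[2]))
    if tag == 'imp':
        return ('imp', gg(f[1]), gg(f[2]))
    if tag == 'or':                            # A or B  ~>  not(not A and not B)
        return neg(('and', neg(gg(f[1])), neg(gg(f[2]))))
    if tag == 'forall':
        return ('forall', f[1], gg(f[2]))
    if tag == 'exists':                        # exists x A  ~>  not forall x not A
        return neg(('forall', f[1], neg(gg(f[2]))))
    raise ValueError(f)

# Excluded middle, P or not P, becomes an intuitionistically provable formula.
em = ('or', ('atom', 'P'), ('imp', ('atom', 'P'), ('atom', 'False')))
print(gg(em))
```

The output for excluded middle is a formula of the shape ¬(¬¬¬P ∧ ¬¬¬P), which is intuitionistically provable; this is the syntactic core of the conservativity results mentioned above.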

From negative translations to classical lambda-calculi

The discovery by Griffin in 1990 of a direct connection between classical reasoning principles and control operators [Gri90] led to the rise of many classical extensions of the lambda-calculus, including deterministic calculi (Parigot's lambda-mu calculus, Krivine's lambda-c-calculus, Curien and Herbelin's lambda-bar-mu-mu-tilde-calculus) and non deterministic ones (Barbanera and Berardi's symmetric lambda-calculus). These new formalisms have been related to more standard intuitionistic type systems using the so-called CPS-translations, which appeared to be the computer scientific equivalent of the negative translations from classical logic to intuitionistic logic. Similar translations have been designed towards polarized linear logic, and the semantics of these calculi have been studied using the standard tools of denotational semantics, category theory, rewriting and game semantics.
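To give a feel for what a CPS-translation does, here is a compact call-by-value translation in Plotkin's style (our sketch; the text above only mentions CPS-translations abstractly). Lambda-terms are encoded as nested tuples, and every translated term abstracts over a continuation variable:

```python
# Call-by-value CPS translation on a tuple encoding of lambda-terms:
# ('var', x), ('lam', x, body), ('app', f, a).
# Fresh continuation variables come from a global counter.

import itertools
_fresh = itertools.count()

def fresh(prefix='k'):
    return f'{prefix}{next(_fresh)}'

def cps(t):
    """[[t]] expects a continuation k and eventually applies k to t's value."""
    tag = t[0]
    k = fresh()
    if tag == 'var':                  # [[x]] = \k. k x
        return ('lam', k, ('app', ('var', k), t))
    if tag == 'lam':                  # [[\x.M]] = \k. k (\x. [[M]])
        x, body = t[1], t[2]
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    if tag == 'app':                  # [[M N]] = \k. [[M]] (\m. [[N]] (\n. m n k))
        f, a = cps(t[1]), cps(t[2])
        m, n = fresh('m'), fresh('n')
        return ('lam', k,
                ('app', f, ('lam', m,
                    ('app', a, ('lam', n,
                        ('app', ('app', ('var', m), ('var', n)), ('var', k)))))))
    raise ValueError(t)

print(cps(('var', 'x')))  # -> ('lam', 'k0', ('app', ('var', 'k0'), ('var', 'x')))
```

The image of the translation is a purely intuitionistic term: all control flow has been made explicit in the continuation argument, which is exactly why CPS-translations mirror the negative translations on the logical side.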

These classical lambda-calculi were initially developed to provide classical logic with a proof theory enjoying confluent cut-elimination, a crucial feature for denotational semantics. Their main interest is that they provide direct computational counterparts to classical proofs, without going through intermediate translations to intuitionistic logic - thus giving a direct description of the computational contents of classical proofs. However, the understanding of the properties of extracted programs suffers from the fact that classical lambda-calculi have been studied mainly from the point of view of typing, a point of view which is more suited to logic and deduction than to computation. This partly explains why such calculi have not been used in practice to implement program extraction from classical proofs: we know how to extract a program from a classical proof in the general case, but we hardly know what this program computes.

Classical realizability

The theory of classical realizability was introduced in the 1990s by J.-L. Krivine as a complete reformulation of intuitionistic realizability for classical logic. It was initially formulated in the framework of second-order Peano arithmetic, before being extended to (classical) Zermelo-Fraenkel set theory, using model constructions in the spirit of forcing. In this framework, Krivine showed how to interpret the axiom of dependent choice (an axiom involved in many proofs in analysis) and how to combine realizability with forcing in the spirit of iterated forcing.

In the last few years, Krivine's realizability model has been extended into a classical realizability model for the calculus of constructions with universes (the core formalism of the Coq proof assistant), which made it possible to interface Krivine's theoretical results with Coq. A small prototype of classical extraction has been developed in Coq upon this extension of the theory of realizability.

Delimited continuations, effects and call-by-need

The original framework of delimited continuations (due to Danvy and Filinski) is now rather well understood, thanks in particular to the works of Herbelin (with Ariola, Sabry and Ghilezan), Zeilberger, Saurin and Munch-Maccagnoni. Understanding the role that delimited continuations may play in logic is however still at its very beginning. Recently, Herbelin and Ilik showed that by restricting delimiters to base types (which corresponds on the logical side to Σ⁰₁ formulas), we obtain intuitionistic logics that prove weakly classical statements such as Markov's principle and the double negation shift, opening the way to generalizing intuitionistic logic to other kinds of delimited effects that might prove or realize other standard axioms. One such effect is call-by-need (or lazy) evaluation, which can be simulated in a language of delimited continuations, as shown by Garcia, Lumsdaine and Sabry. If one considers that one of the major differences between proofs and programs is that the formulas that type proofs are types that may depend on proofs (think for instance of the axiom of choice, which reflects in the choice function some of the values hidden in the proof), then the call-by-need lambda-calculus extended with control might be the right effectful framework that logic is looking for: at the level of typing, it has exactly the same types as the ordinary lambda-calculus with control; but at the level of computation, it is able to share computations (as a memoization effect) and to ensure that (classical) proofs possibly embedded in the objects of the discourse (as with the axiom of choice) are consistently defined wherever they are referred to in the formulas.
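The sharing behavior of call-by-need can be shown in a few lines (our toy sketch of the memoization effect discussed above, not of the delimited-continuation encoding by Garcia, Lumsdaine and Sabry): a thunk runs its computation at most once, and every later use sees the same result.

```python
# Call-by-need as a memoization effect: delay a computation, force it at
# most once, and share the result among all uses.

class Thunk:
    """A suspended zero-argument computation with at-most-once evaluation."""
    _UNSET = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = Thunk._UNSET

    def force(self):
        if self._value is Thunk._UNSET:
            self._value = self._compute()   # evaluated exactly once
            self._compute = None            # drop the closure, keep the value
        return self._value

calls = []
t = Thunk(lambda: calls.append('run') or 21 * 2)

print(t.force())   # -> 42 (computation runs)
print(t.force())   # -> 42 (shared result, no second run)
print(len(calls))  # -> 1
```

The consistency requirement in the paragraph above is exactly this guarantee: every occurrence of the shared object refers to one and the same evaluation.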

Axioms as primitive instructions

Krivine's version of the classical lambda-calculus is called the lambda-c-calculus, where the "c" stands both for "classical" and for "constants". This calculus is designed in such a way that it is very easy to add new instructions, with their reduction rules, to the core language. Because the main properties are very modular, only very little checking is required to make sure that all the concepts of realizability remain valid. The new instructions used by Krivine have a very "low level" computational content: one is simply a hardware "clock" giving the time elapsed since the beginning of the computation, while others have to do with the management of a single memory cell. These instructions were specifically designed to realize high-level mathematical axioms such as the axiom of dependent choice or the existence of a non-principal ultrafilter on the powerset of the natural numbers. The interplay between these low-level programming instructions and those high-level mathematical principles is rather striking. Preliminary investigation by Miquel suggests that by adding an explicit notion of "state" to Krivine's calculus, we can make those instructions even simpler (they simply manipulate the state) while making the link with forcing clearer. We thus hope to keep extending this simple "assembly language" to realize more and more principles of classical mathematics.
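To make this extensible-instruction design concrete, here is a miniature Krivine-style abstract machine (our illustrative sketch, much poorer than the actual lambda-c-calculus): processes are term/environment/stack triples, and instructions such as `cc` (call/cc) and a step-counting `clock` are just extra cases that manipulate the stack, added without touching the core push/pop/lookup loop.

```python
# A toy Krivine abstract machine.  Terms are tuples:
# ('var', x), ('lam', x, body), ('app', f, a), ('lit', n),
# plus instructions ('cc',) and ('clock',), and captured
# continuations ('cont', saved_stack).

def run(term, fuel=10_000):
    steps = 0
    t, env, stack = term, {}, []
    while steps < fuel:
        steps += 1
        tag = t[0]
        if tag == 'var':                      # look up a closure
            t, env = env[t[1]]
        elif tag == 'app':                    # push argument, evaluate head
            stack.append((t[2], env))
            t = t[1]
        elif tag == 'lam':
            if not stack:
                return ('lam-value', t, env)  # weak head normal form
            env = dict(env, **{t[1]: stack.pop()})
            t = t[2]
        elif tag == 'cc':                     # cc * u.pi  >  u * k_pi.pi
            u, uenv = stack.pop()
            stack.append((('cont', list(stack)), {}))
            t, env = u, uenv
        elif tag == 'cont':                   # k_pi * u.pi'  >  u * pi
            u, uenv = stack.pop()
            stack = list(t[1])                # restore the saved stack
            t, env = u, uenv
        elif tag == 'clock':                  # clock * u.pi  >  u * steps.pi
            u, uenv = stack.pop()
            stack.append((('lit', steps), {}))
            t, env = u, uenv
        elif tag == 'lit':
            return ('lit-value', t[1])
        else:
            raise ValueError(t)
    raise RuntimeError('out of fuel')

# (cc (\k. k I)) 5  behaves like the identity applied to 5.
I = ('lam', 'x', ('var', 'x'))
prog = ('app', ('app', ('cc',), ('lam', 'k', ('app', ('var', 'k'), I))), ('lit', 5))
print(run(prog))   # -> ('lit-value', 5)

# clock (\n. n)  returns the number of steps executed so far.
print(run(('app', ('clock',), ('lam', 'n', ('var', 'n')))))
```

The point of the sketch is the modularity: each instruction is one branch that only reads and writes the stack, which is why realizability proofs about such machines can remain agnostic about which instructions are present.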


More recently, many connections between linear logic and concurrent programming have been successfully investigated using realizability. Formal realizability interpretations in concurrent processes were proposed, first by Abramsky in an adapted (and slightly ad hoc) variant of CCS [Abr01], and then in various successively refined forms in other calculi [Bef06]. Realizability models for linear logic can be built in such frameworks, using principles similar to those of classical realizability [Bef05], and classical realizability itself can be seen as a fragment of larger systems based on concurrent processes. Such models are very similar in spirit to those used for specification formalisms for processes, including modal logics à la Hennessy-Milner [HM85] and logics for spatial reasoning about memory states [OHP99]. However, a good correspondence between proofs and programs is at the moment only achieved for very restrictive classes of concurrent programs, essentially those that behave like lambda-terms with classical control operators [BHY01,HL10]. Conversely, logical systems actually developed for concurrency do not enjoy good semantic properties from a proof-theoretical perspective. This is partly because the communities and their goals are very different. Reaching a correspondence as accurate and productive in the concurrent world as it was for intuitionistic logic and functional programming requires an important research effort involving both communities. The novel ideas brought by classical realizability are promising in this respect, and several works already demonstrate how concurrent features can be implemented in this framework: thread synchronization [DK00], communication protocols [KL07], explicit scheduling [Miq09c, chapter 5].

2.3. Objectives, originality and/or novelty of the proposal

The goal of this project is to extend the correspondence between proofs and programs (the Curry-Howard correspondence) beyond the scope of traditional functional languages. We plan to incorporate features of imperative languages (side effects, references, input/output) as well as features of concurrent programming languages such as the pi-calculus (parallelism and non determinism). As far as possible, we would like to test our intermediate results by integrating them within a proof assistant. (The integration of classical realizability within the Coq proof assistant constitutes a first promising example.) By its very nature, such a research program requires a long-term effort combining expertise in various research fields, such as type systems, realizability, forcing, model theory, linear logic, category theory, game semantics, concurrent programming and rewriting. Many decades were necessary to design and understand the proofs-as-programs paradigm in the simple case of intuitionistic logic and pure functional programming. The problems we want to tackle appear to be much more difficult than in the intuitionistic/functional case, and we do not expect to solve all of them within the limited duration of the project.

Realizability rather than typing

The correspondence between proofs and programs has been studied (at least in France) mainly from the point of view of typing and deduction. Realizability is a more expressive setting than typing, since it focuses on the computational behavior of extracted programs rather than on the way these programs are constructed. In practice, typing is included in realizability, which means that correctness w.r.t. deduction implies correctness w.r.t. computation. But realizability provides many more interesting features, some of which can serve in the future as a source of inspiration for the extension of type systems. One such example is given by Spector's analysis of the axiom of dependent choice in terms of intuitionistic realizability (via a negative translation), which became the basis of the extension of Gödel's system T with bar recursion. However, it is important to mention that realizability results can be used directly in extraction procedures without being incorporated into type systems. When considering program extraction, correctness has to be understood in the sense of computation (as given by realizability models), not in the sense of deduction (as given by type systems). As an example, the current prototype of classical extraction in Coq already uses this subtle distinction to replace on the fly some realizers constructed from derivations (i.e. obtained by deduction) by optimized versions that are computationally equivalent to the original ones, but meaningless from the point of view of deduction.

Realizability is more flexible than typing, which is best illustrated by the original design of the theory of classical realizability. The main novelty of Krivine's approach (compared to more standard approaches based on typing) lies in the use of an extensible (classical) lambda-calculus. Indeed, Krivine's lambda-c-calculus (on which the theory of classical realizability is based) can be enriched with many extra instructions accompanied by their computation rules, and most realizability results hold without making assumptions on which instructions are actually present in the calculus - an important property we shall call the 'open world assumption'. This sharply contrasts with the standard approach of typed classical lambda-calculi, whose properties rely on an implicit 'closed world assumption': the results hold as long as we do not go beyond the syntax and computation rules fixed by the formalism.

Far from being connected to a particular family of formalisms, the 'open world assumption' is intrinsic to the approach of realizability. We can thus expect to benefit from this fundamental property with many different calculi based on different programming paradigms (imperative programming, concurrency) combined with different logical frameworks (classical logic, modal logic, linear logic). Also notice that this approach makes the connection with real programming languages (that are already highly extensible) more realistic.

Combining different theoretical approaches

Past experience has shown that the correspondence between proofs and programs cannot be fully understood or analysed from a single point of view. Since we want to address different programming paradigms (functional programming, imperative programming, concurrent programming), we need to carry out our study in many different logical frameworks (intuitionistic logic, classical logic, linear logic and differential linear logic). We also need to study the problem by combining different theoretical tools, such as realizability, model theory, category theory, rewriting and game semantics. Each of these tools belongs to the domain of expertise of one or several participants of the project.