
Tagless-Final Style: Applications and Examples

 

This page collects examples and sample DSLs implemented in the typed final (``tagless-final'') style.


 

Ordinary and one-pass CPS transformation

We demonstrate ordinary and administrative-redex-free call-by-value Continuation-Passing Style (CPS) transformations that assuredly produce well-typed terms and are patently total.

Our goal here is not to evaluate, view or serialize a tagless-final term, but to transform it into another one. The result is again a tagless-final term, which we can interpret in multiple ways: evaluate, view, or transform once more. We first came across transformations of tagless-final terms when we discussed pushing negation down in the simple, unityped language of addition and negation. The general case is more complex. It is natural to require that the result of transforming a well-typed term be well-typed. In the tagless-final approach that requirement is satisfied automatically: after all, only well-typed terms are expressible. We impose the more stringent requirement that a transformation be total. In particular, the fact that the transformation handles all possible cases of the source terms must be patently, syntactically clear. The coverage must be so clear that the metalanguage compiler can see it, without the aid of extra tools.

Since the only thing we can do with tagless-final terms is to interpret them, the CPS transformer is written in the form of an interpreter. It interprets source terms, yielding transformed terms, which can in turn be interpreted in many ways. In particular, the terms can be interpreted by the CPS transformer again, yielding 2-CPS (twice-transformed) terms. CPS transformers thus compose, as expected.

A particular complication of the CPS transform is that the type of the result differs from the type of the source term: the CPS transform translates not only terms but also types. Moreover, base types and the arrow type are translated in different ways. To express CPS, we need an interpreter that gives meaning not only to terms but also to types. In particular, what function types denote should be up to the particular interpreter. It turns out that the existing tagless-final framework is up to the task: with the help of type families, we can after all define an instance of Symantics that interprets source types as CPS-transformed types.
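
The following self-contained sketch illustrates the idea. It is deliberately simplified and is not the code of CPS.hs below: the minimal higher-order Symantics, the type family CPSV and the wrapper C are assumptions made for this illustration only. The type family interprets source types as CPS-transformed types; the Symantics instance for C interprets source terms, producing tagless-final terms that can be interpreted further.

     {-# LANGUAGE TypeFamilies #-}
     
     -- A minimal higher-order-abstract-syntax Symantics, assumed
     -- for this sketch only
     class Symantics repr where
       int :: Int -> repr Int
       add :: repr Int -> repr Int -> repr Int
       lam :: (repr a -> repr b) -> repr (a -> b)
       app :: repr (a -> b) -> repr a -> repr b
     
     -- The call-by-value CPS translation of types, with answer type ans
     type family CPSV ans a
     type instance CPSV ans Int      = Int
     type instance CPSV ans (a -> b) = CPSV ans a -> (CPSV ans b -> ans) -> ans
     
     -- A source term of type a becomes a target term awaiting a continuation
     newtype C ans repr a = C { unC :: repr ((CPSV ans a -> ans) -> ans) }
     
     -- The CPS transformer is just another interpreter of source terms,
     -- producing tagless-final terms in turn
     instance Symantics repr => Symantics (C ans repr) where
       int n     = C $ lam (\k -> k `app` int n)
       add x y   = C $ lam (\k ->
                     unC x `app` lam (\a ->
                     unC y `app` lam (\b -> k `app` add a b)))
       lam f     = C $ lam (\k -> k `app`
                     lam (\x -> unC (f (C (lam (\k' -> k' `app` x))))))
       app e1 e2 = C $ lam (\k ->
                     unC e1 `app` lam (\g ->
                     unC e2 `app` lam (\v -> (g `app` v) `app` k)))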

The ordinary (Fischer or Plotkin) CPS transform introduces many administrative redices, which make the result hard to read. Danvy and Filinski proposed a one-pass CPS transform, which relies on the metalanguage to get rid of the administrative redices. The one-pass CPS transform can be regarded as an example of normalization by evaluation.

CPS.hs [10K]
Ordinary and one-pass CPS transforms and their compositions

TTFdBHO.hs [2K]
The simplest tagless-final transformer, from the De Bruijn-based Symantics to the one based on higher-order abstract syntax

Olivier Danvy and Andrzej Filinski. Representing Control: A Study of the CPS Transformation.
Mathematical Structures in Computer Science, 1992.

 

Type-directed partial evaluation

Olivier Danvy's original POPL96 paper on type-directed partial evaluation used an untyped target language, represented as an algebraic data type. Type preservation was not apparent and had to be proved. In our presentation, the result of reification is a typed expression, in the tagless-final form. Type preservation of reification is now syntactically apparent and is verified by the Haskell type-checker. In the tagless-final presentation, reification and reflection seem particularly symmetric, elegant and insightful.
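
The following sketch shows that symmetry in its simplest form. The minimal Symantics and the names RR, base and (-->) are ours, chosen for this illustration (TDPE.hs differs in details): reify turns a Haskell value into a tagless-final term, reflect goes the other way, and the two are defined by mutual induction on a type descriptor.

     -- A minimal higher-order Symantics, assumed for this sketch
     class Symantics repr where
       lam :: (repr a -> repr b) -> repr (a -> b)
       app :: repr (a -> b) -> repr a -> repr b
     
     -- A type descriptor pairs reification and reflection at that type:
     -- a is the object type, sa its meta-level (Haskell) counterpart
     data RR repr a sa = RR { reify   :: sa -> repr a
                            , reflect :: repr a -> sa }
     
     -- At a base type, the meta-level value is already a term
     base :: RR repr Int (repr Int)
     base = RR id id
     
     -- At an arrow type, reify and reflect call each other
     infixr 5 -->
     (-->) :: Symantics repr =>
              RR repr a sa -> RR repr b sb -> RR repr (a -> b) (sa -> sb)
     ra --> rb = RR
       { reify   = \f -> lam (\x -> reify rb (f (reflect ra x)))
       , reflect = \f x -> reflect rb (app f (reify ra x)) }
     
     -- For example, reify (base --> base) id :: repr (Int -> Int)
     -- yields the tagless-final term lam (\x -> x).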

TDPE.hs [6K]
Tagless-final presentation of type-directed partial evaluation
ToTDPE.hs [<1K]
The imported module with sample functions to reify. Compiling this module makes for a nicer example.

< http://www.brics.dk/~danvy/tdpe-ln.pdf >
Olivier Danvy: Lecture notes on type-directed partial evaluation. The lecture notes are based on his POPL96 paper.

 

Parameterizing expressions by the evaluation order: call-by-name, call-by-value, call-by-need

We demonstrate a tagless-final embedding of the simply-typed lambda-calculus with integers and constants, and its uniform interpretations with three different evaluation strategies: call-by-name, call-by-value and call-by-need. The three interpreters share almost all of the code, differing only in the interpretation of object-language abstractions (lam-forms). The semantics of abstraction is indeed what sets the three evaluation orders apart. The examples also implement tracing of the interpreter actions, to make clear which expressions are evaluated, in which sequence, and how many times. The interpreters are assuredly and patently type-preserving: the host languages, OCaml and Haskell, guarantee that. We thus obtain a higher-order embedded domain-specific language with a selectable evaluation order.

In Haskell, we define the DSL as a type class Symantics. The type class declaration specifies the syntax and its instances define the semantics of the language. Our DSL is typed. We use the Haskell type constant Int to also denote the DSL type of integers (although we could have used a different name). DSL function types are represented in Haskell by the type synonym Arr repr a b:

     class Symantics repr where
          int :: Int -> repr Int                          -- integer literals
          add :: repr Int -> repr Int -> repr Int         -- addition 
          sub :: repr Int -> repr Int -> repr Int         --  and subtraction
     
          app :: repr (Arr repr a b) -> repr a -> repr b  -- application
     
     type Arr repr a b = repr a -> repr b
This somewhat unusual representation of DSL functions permits the uniform implementation of evaluation strategies. The class Symantics specifies the part of our calculus that is interpreted the same way in all evaluation strategies. Only DSL abstractions are interpreted differently; therefore, we make them a distinct DSL `feature', defined by its own type class:
     class SymLam repr where
          lam :: (repr a -> repr b) -> repr (Arr repr a b)
After introducing the convenient `macro' let_ (which could have been called `bind') we write a sample object language term as follows:
     let_ :: (Symantics repr, SymLam repr) => 
             repr a -> (repr a -> repr b) -> repr b
     let_ x y = (lam y) `app` x
     
     -- The representation of the lambda-calculus term
     -- (\z x -> let y = x + x in y + y) (100 - 10) (5 + 5) 
     t2 :: (Symantics repr, SymLam repr) => repr Int
     t2 = (lam $ \z -> lam $ \x -> let_ (x `add` x) $ \y -> y `add` y)
           `app` (int 100 `sub` int 10)
           `app` (int 5 `add` int 5)
The Haskell expression t2 represents the lambda-calculus term (\z x -> let y = x + x in y + y) (100 - 10) (5 + 5); the variable z does not occur in the body of the abstraction whereas x occurs twice.

We are ready to write interpreters for our DSL. We will interpret DSL expressions of type a as Haskell values of type m a, where m is some MonadIO monad. IO is needed to print the trace of interpreter actions. The representation is common to the three evaluation strategies. To distinguish them (when interpreting lam later on) we add a phantom type parameter: the label l. It will take the values Name, Value and Lazy; more strategies may be added later on.

     newtype S l m a = S { unS :: m a }
                     deriving (Monad, Applicative, Functor, MonadIO)

Here is the code common to all interpreters regardless of the evaluation strategy. The reason to parameterize the interpreter over MonadIO is to print out the evaluation trace, so that we can see how the evaluation strategies differ in the number of additions and subtractions performed.

     instance MonadIO m => Symantics (S l m) where
       int = return
       add x y = do a <- x
                    b <- y
                    liftIO $ putStrLn "Adding"
                    return (a + b)
       sub x y = do a <- x
                    b <- y
                    liftIO $ putStrLn "Subtracting"
                    return (a - b)
       app x y = x >>= ($ y)

Call-by-name is defined by the following interpretation of DSL abstractions:

     data Name                               -- The label
     
     instance Monad m => SymLam (S Name m) where
       lam f = return f
The abstraction, when applied, receives an unevaluated argument expression -- of the type repr a, that is, S Name m a -- and uses (substitutes) it as it is. We evaluate the sample term under call-by-name:
     runName :: S Name m a -> m a
     runName x = unS x
     
     t2SN = runName t2 >>= print
obtaining the result 40 and observing from the trace that subtraction was not performed: the value of int 100 `sub` int 10 was not needed to compute the result of t2. On the other hand, the sub-expression int 5 `add` int 5 was evaluated four times.

Call-by-value differs from call-by-name only in the interpretation of abstractions:

     data Value
     instance Monad m => SymLam (S Value m) where
        -- lam f = return (\x -> (f . return) =<< x)
        -- or, in the pointless notation, as below:
        lam f = return (f . return =<<)
As in the call-by-name evaluator, the lambda-abstraction receives an unevaluated argument expression -- but now it evaluates the argument before passing the result to the abstraction body f. This is literally the definition of call-by-value. The very same sample term t2 can now be interpreted differently:
     runValue :: S Value m a -> m a
     runValue x = unS x
     t2SV = runValue t2 >>= print
giving in the end the same result 40. Although the result of the subtraction int 100 `sub` int 10 in t2 was not needed, the trace shows it was performed. On the other hand, the argument sub-expression int 5 `add` int 5 was evaluated only once. In call-by-value, arguments are evaluated exactly once.

The call-by-need evaluator differs from the others again in one line, the interpretation of abstractions:

     data Lazy
     instance MonadIO m => SymLam (S Lazy m) where
        -- lam f = return (\x -> f =<< share x)
        -- Or, in the pointless notation
        lam f = return ((f =<<) . share)
The evaluation of the body of the abstraction always starts by lazily sharing the argument expression. (The implementation of share is straightforward; see the code for details, or the sketch below.) Again, this is the definition of call-by-need. We run the very same term t2 with the new evaluator, obtaining the same result 40 and observing from the execution trace that the subtraction was not evaluated (because it was not needed) but the needed argument expression int 5 `add` int 5 was evaluated once. In call-by-need, application arguments are evaluated at most once.
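
For reference, here is one possible implementation of share, a sketch in the spirit of the interpreter above (the actual definition is in CBAny.hs): the argument computation is stored in an IORef and is replaced by its result upon the first run.

     import Data.IORef (newIORef, readIORef, writeIORef)
     import Control.Monad.IO.Class (MonadIO, liftIO)
     
     -- Memoize a computation: the returned action runs the argument
     -- at most once and caches its result
     share :: MonadIO m => m a -> m (m a)
     share m = do
       r <- liftIO $ newIORef (Left m)
       return $ do
         st <- liftIO $ readIORef r
         case st of
           Right v  -> return v
           Left act -> do v <- act
                          liftIO $ writeIORef r (Right v)
                          return v
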
Version
The current version is 1.2, March 17, 2010.
References
CBAny.hs [7K]
The complete code with examples and tests

call_by_any.ml [6K]
The OCaml version of the code, using modules and functors

CB.hs [7K]
The older version of the code
It was originally posted as CBN, CBV, Lazy in the same final tagless framework on the Haskell-Cafe mailing list on Thu, 8 Oct 2009 00:54:14 -0700 (PDT).

CB98.hs [7K]
The same code in Haskell98, demonstrating that the tagless-final approach does not require fancy type-system features. Features such as type families, used in CB.hs, add convenience and gloss; one can do without them, however. The code can easily be re-written in OCaml, which has no type classes, type families or other bleeding-edge features.

Parameterized syntax: interpreters without run-time interpretive overhead
Evaluating the same Scheme source language expression using call-by-value, call-by-reference, and call-by-name evaluation strategies.

 

A surprisingly simple implementation of staging

We present an unexpectedly straightforward embedding of a staged language that may be regarded as a simple version of MetaOCaml. In fact, MetaOCaml quotations and splices can be translated into our language in the most direct way. The language supports open quotation (allowing free variables in fragments of the generated code) and cross-stage persistence.

The staged language is embedded into OCaml in the tagless-final style, as the signature Code and its three implementations: CodeReal (in terms of ordinary OCaml), CodeString (performing code generation via strings), and CodeCode (in terms of MetaOCaml). The CodeReal implementation -- of the staged calculus in terms of ordinary, unstaged OCaml -- embodies an unstaging translation. It is orders of magnitude simpler than the other such translations described in the literature, requiring no fancy types or type-level operations. The staged language can be extended to support the polymorphic let.

Version
The current version is February 2017.
References
let-insertion.pdf [225K]
Generating Code with Polymorphic let: A Ballad of Value Restriction, Copying and Sharing
Electronic Proceedings in Theoretical Computer Science (EPTCS) 241, 2017, pp. 1-22 (Post-Proceedings ML/OCaml 2015) DOI: 10.4204/EPTCS.241.1
Sec 3.1 of the paper describes the staged calculus in the form of code-generating combinators, and the translation of MetaOCaml quotations and splices into it. The section also formulates the correctness properties of the translation.

polylet.ml [17K]
The OCaml code for the embedding of the staged language into OCaml, and a few examples

Closing the Stage: From Staged Code to Typed Closures
The previous, immensely more complex unstaging translation

Tagless-Staged: a step toward MetaHaskell
Type- and Scope-Safe Code Generation with Mutable Cells
The tagless-final embedding, in Haskell and OCaml, of a staged calculus with a rather complicated, MLF-like type system. The type system ensures scope safety of the generated code even in the presence of effects.

 

Linear and affine lambda-calculi

One may think that only those DSLs whose type system is a subset of Haskell's can be embedded in Haskell. To counter that impression we show how to faithfully embed the typed linear lambda calculus, in which any bound variable must be referenced exactly once in the abstraction's body. As before, only well-typed and well-formed terms are representable. Haskell, as the metalanguage, will statically reject as ill-typed any attempt to represent a term whose bound variable is referenced several times -- or, as in the K combinator, never.

We build on the embedding of the ordinary simply typed lambda calculus with De Bruijn indices described earlier. An object term of the type a was represented as a value of the type repr h a, where the binary type constructor repr is a member of the class Symantics. Here h stands for the type environment, assigning types to free variables (`hypotheses') of a term. Linear lambda calculus regards bound variables as representing resources; referencing a variable consumes the resource. We use the type environment for tracking the state of resources: available or consumed. The type environment becomes the type state. We follow the approach described in Variable (type)state `monad'.

We represent linear lambda terms by Haskell values of the type repr hi ho a, where hi stands for the variable state before evaluating the term and ho represents the state after evaluating the term. To be more precise, hi and ho are the types of the variable states. We can determine the types and hence the state of the variables statically: As usual, the type checker does abstract interpretation. In our tagless-final encoding, lam has the following type

     lam :: repr (F a,hi) (U,ho) b  -> repr hi ho (a->b)
The expression (lam e) has the type repr hi ho (a->b) provided the body of abstraction, e, has the type repr (F a,hi) (U,ho) b. That is, in the environment extended with a term of the type a, the body must produce the value of type b. The body must consume the term at the top of the environment, changing the type of the first environment cell from F a to U (the type of the used variable).
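
For illustration, here is a sketch of such an interface; the names and exact signatures below are ours, and the real thing is in LinearLC.hs. The variable-access and weakening combinators thread the type state in the same manner as lam:

     newtype F a = F a      -- an available (fresh) hypothesis of type a
     data U      = Used     -- a used-up hypothesis
     
     -- De Bruijn-style linear Symantics; hi and ho are the variable
     -- states before and after the term
     class LinSymantics repr where
       vz  :: repr (F a, h) (U, h) a                      -- use the top variable
       vs  :: repr hi ho a -> repr (any, hi) (any, ho) a  -- skip one binding
       lam :: repr (F a, hi) (U, ho) b -> repr hi ho (a -> b)
       app :: repr hi h (a -> b) -> repr h ho a -> repr hi ho b
     
     -- The identity \x -> x uses its variable exactly once and is accepted:
     tI :: LinSymantics repr => repr h h (a -> a)
     tI = lam vz
     
     -- An attempt at K, lam (lam (vs vz)), is rejected: the inner
     -- binding enters with the state F a but is never changed to U.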

A trivial modification turns the embedding of the linear lambda-calculus into that of the affine lambda-calculus, which permits ignoring bound variables. The K combinator becomes expressible.

LinearLC.hs [11K]
Commented code defining the typed linear lambda calculus and its two interpreters, to evaluate and to show linear lambda terms. Later we add general abstractions imposing no constraints on the use of bound variables.

Jeff Polakow: Embedding a full linear Lambda calculus in Haskell
Proceedings of the ACM SIGPLAN Haskell Symposium, 2015, pp. 177-188
Polakow's tagless-final linear lambda-calculus interpreter relies on higher-order abstract syntax rather than the De Bruijn indices of LinearLC.hs. He implements the full linear lambda calculus with additives and units.

 

Math layout

Jacques Carette has posed the problem of designing an embedded DSL for math layout, with subscripts, superscripts, tables, fractions, and other standard features. The language has to support an extensible set of glyphs, such as Greek letters and special characters. It should be possible to add more glyphs later, similar to `using' stmaryrd, etc. in LaTeX. Adding a new glyph should not affect the existing documents in any way. Another requirement is an extensible set of renderers (formatters), such as plain text, LaTeX, HTML+MathML, etc. One should be able to add a new formatter later, without having to re-write or even recompile the existing documents. A formatter does not have to support all layout features or glyphs; however, an attempt to render a document with an unsuitable formatter should be flagged as a type error.

The tagless-final approach seems to easily fulfill all the requirements, as the enclosed code demonstrates.
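
A sketch of how the requirements are met (the names below are invented for this illustration; see MathLayout.hs for the actual interface): every group of layout features or glyphs is its own type class, every formatter is a type, and a document is a polymorphic value constrained only by the classes it actually uses. Rendering a document with a formatter that lacks an instance for one of those classes is a type error.

     -- Core layout features and a glyph set, each its own class
     class Layout repr where
       txt  :: String -> repr
       sup  :: repr -> repr -> repr    -- base, superscript
       frac :: repr -> repr -> repr    -- numerator, denominator
     
     class Greek repr where
       alpha :: repr
     
     -- A LaTeX formatter: one instance per supported feature class
     newtype LaTeX = LaTeX { latex :: String }
     
     instance Layout LaTeX where
       txt s                    = LaTeX s
       sup  (LaTeX b) (LaTeX e) = LaTeX (b ++ "^{" ++ e ++ "}")
       frac (LaTeX n) (LaTeX d) = LaTeX ("\\frac{" ++ n ++ "}{" ++ d ++ "}")
     
     instance Greek LaTeX where
       alpha = LaTeX "\\alpha"
     
     -- A document lists, in its type, exactly the features it needs
     doc :: (Layout repr, Greek repr) => repr
     doc = frac alpha (sup (txt "x") (txt "2"))
     
     -- latex doc == "\\frac{\\alpha}{x^{2}}"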

Version
The current version is May 2016.
References
MathLayout.hs [5K]
Haskell code outlining the embedding of the layout language, with LaTeX and HTML backends

 

Lambek calculi

Lambek calculus is a resource-sensitive calculus introduced by Lambek in 1958, almost three decades before linear logic. Like linear logic, Lambek calculus does not have the weakening rule. In fact, in the non-associative Lambek calculus NL, the antecedent of a sequent is a tree and there are no structural rules at all. Lambek calculus is hence the purest, simplest, and earliest substructural logic. Adding the associativity and commutativity rules (that is, treating the antecedent as a sequence rather than a tree, and allowing exchange) turns Lambek calculus into a fragment of Multiplicative Intuitionistic Linear Logic (MILL).

The most noticeable difference of Lambek calculus from the conventional or linear lambda-calculus is its directional implications and abstractions. There are two function types, typically written B / A and A \ B and called (left and right) slash types rather than arrow types. A function of type B / A accepts an argument of type A on its right; a function of type A \ B accepts its A argument on the left. There are hence two rules for eliminating implications and, correspondingly, two rules for introducing them, which bring in the power of hypothetical reasoning. Although the NL calculus per se has no structural rules, various NL theories add so-called structural postulates: ways to rearrange the antecedent structure in particular, limited ways.

All these features set Lambek calculi even farther apart from the Haskell type system. And yet they can be embedded in Haskell, in the tagless-final approach. All and only valid derivations are represented by Haskell values of a particular repr type. One tagless-final interpreter prints the ``yield'': the constants used in the derivation. In linguistic applications the yield spells out the sentence whose parse is represented by the derivation. Other interpreters transform the derivation to a logical formula that stands for the meaning of the sentence.
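
To give the flavor of the directional types, here is a drastically simplified sketch with invented names; it ignores the antecedent structure, structural postulates and hypothetical reasoning that HOCCG.hs actually deals with.

     {-# LANGUAGE TypeOperators #-}
     
     data b :/ a        -- B / A: a function looking for an A to its right
     data a :\ b        -- A \ B: a function looking for an A to its left
     data NP            -- noun phrases
     data S             -- sentences
     
     class Lambek repr where
       appR :: repr (b :/ a) -> repr a -> repr b   -- forward application
       appL :: repr a -> repr (a :\ b) -> repr b   -- backward application
     
     class Lexicon repr where
       john, mary :: repr NP
       likes      :: repr ((NP :\ S) :/ NP)
     
     -- The derivation of "John likes Mary": likes first consumes its
     -- object on the right, then the subject on the left
     sentence :: (Lambek repr, Lexicon repr) => repr S
     sentence = john `appL` (likes `appR` mary)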

The so-called symmetric, Lambek-Grishin calculus has, in addition to the directional implications, directional co-implications and interesting symmetric structural rules for moving between the antecedent and the consequent structures. It too can be represented in the tagless-final style. The semantic interpretation builds the meaning formula in continuation-passing style.

References
Michael Moortgat: Typelogical Grammar
The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta (ed.)
< http://plato.stanford.edu/entries/typelogical-grammar/ >

HOCCG.hs [28K]
Explanation of the code
Non-associative Lambek calculus NL with the non-standard semantic interpretation. Applications to quantification, non-canonical coordination and gapping.

LG.hs [10K]
Tagless-final embedding of the Lambek-Grishin symmetric calculus and its 1-CPS translation
Our starting point is the regular CBV CPS translation for lambda-LG, described on p. 697 of the paper by Michael Moortgat ``Symmetric Categorial Grammar''. J. Philos. Logic, 2009. The original translation (eq (20) of the paper) yields many administrative beta redices. The present translation uses lightweight staging to remove such redices in the process.

 

Mobile Code in C# via Finally Tagless Interpreters

Sandro Magi shows that different interpretations of the same DSL term may not only involve different run-time systems but also occur on different hosts.
Version
The current version is June 23, 2009.
References
< http://higherlogics.blogspot.com/2009/06/mobile-code-in-c-via-finally-tagless.html >
The article by Sandro Magi


Last updated April 5, 2017
