This page collects examples and sample DSLs implemented
in the typed final (``tagless-final'') style.

- A surprisingly simple implementation of staging
- lambda to SKI: a different view on the old translation
- Expressing sharing
- Effects Without Monads: Non-determinism -- embedding effectful DSLs
- Reconciling Abstraction with High Performance: A MetaOCaml approach
- Tagless-final operational semantics
- Ordinary and one-pass CPS transformation
- Type-directed partial evaluation
- Parameterizing expressions by the evaluation order: call-by-name, call-by-value, call-by-need
- Type-safe Formatted IO (printf and scanf) as an embedded DSL: the final view
- Tagless-Staged: a step toward MetaHaskell
- Type- and Scope-Safe Code Generation with Mutable Cells: A tagless-final OCaml embedding of a staged calculus with a rather complicated, MLF-like type system
- Math layout
- Linear and affine lambda-calculi
- Lambda: the ultimate syntax-semantics interface
- Lambek calculi
- Transformational Semantics (TS): Gradually Transforming Syntax to Semantics
- Mobile Code in C# via Finally Tagless Interpreters

- Taking the call-by-value untyped lambda-calculus with
De-Bruijn--indexed variables, integers and integer operations as an
example, we embed it in OCaml and demonstrate several
interpreters. Notable among them are two evaluators/normalizers, which
turn a lambda-calculus term into the corresponding value term (weak
normal form), if it exists. One evaluator is `denotational', based
on normalization-by-evaluation. The other is a reducer.
The reducer normalizes terms by successive rewriting -- finding a redex and reducing it -- until it either gets stuck or produces a value. The reducer is instrumented to count the number of reduction steps. The reducer is written to be as close as possible to the familiar big-step or structural operational semantics of the calculus. Therefore, its count of reduction steps can be trusted to represent the complexity of term reduction.

The tagless-final style is designed around a compositional mapping of terms of the embedded DSL to values in some `repr` domain -- i.e., denotational semantics. Operational semantics in the tagless-final style did not seem possible at all. After all, to find a redex we have to pattern-match on a term. Performing an application, i.e., substituting into an abstraction's body, calls for more pattern-matching. Tagless-final terms are not represented by a data type (AST); how do we pattern-match on them? This article demonstrates how, explaining how to search for a redex and substitute in the tagless-final style. Operational semantics turns out not only possible but also relatively straightforward and even profitable (e.g., letting us short-circuit the substitution when the variable in question does not appear in the term).

Our calculus is the untyped call-by-value lambda-calculus with De-Bruijn--indexed variables, integers, and operations to increment an integer and to branch on zero. It can be presented as the following OCaml signature:

```ocaml
module type Lambda = sig
  type repr
  val int : int -> repr           (* integer literal *)
  val vz  : repr                  (* variable of the De Bruijn index 0 *)
  val vs  : repr -> repr          (* next De Bruijn index (weakening) *)
  val lam : repr -> repr          (* abstraction *)
  val (/) : repr -> repr -> repr  (* application *)
  (* Constants *)
  val inc : repr                  (* successor *)
  val ifz : repr                  (* branch on zero *)
end
```

The signature can be read as the context-free grammar of the language: `repr` is the start symbol and `lam`, `inc`, etc. are the terminals. In the conventional CFG notation it would look as:

```
repr -> "vz"
repr -> "vs" repr
repr -> "lam" repr
...
```

This grammar derives, for example, `lam vz` -- the identity combinator.

OCaml programmers might look at `Lambda` as defining a set of OCaml functions to construct terms of the embedded calculus. All such terms are represented as OCaml values of the abstract type `repr`. Thus `vz` is the term for the 0-index free variable, and `lam` is the OCaml function that constructs the lambda-abstraction term given the term for the abstraction's body. Thus `lam vz` is the OCaml expression that evaluates to the value of the type `repr` that represents the identity combinator. Since we use OCaml expressions to construct lambda-calculus terms, we follow OCaml syntactic conventions regarding fixity, parentheses, application associativity and precedence, etc. For example, `lam (lam (vs vz))` represents the K combinator. We can write this expression without parentheses, using `@@` -- the right-associative OCaml application operator of low precedence -- as `lam @@ lam @@ vs vz`. The second Church numeral looks as `lam @@ lam @@ vs vz / (vs vz / vz)` (in OCaml, slash is an infix operator).

To specify the meaning of the `vz`, `lam`, etc. operations is to implement the `Lambda` signature. An implementation, such as `NativeEval` below, tells what `repr` is, concretely -- and how exactly the `vz`, `lam`, etc. operations construct the `repr` values that represent the free-variable, the abstraction, etc. terms. In other words, an implementation of `Lambda` tells how to compute the meaning, or the `repr` value, of each term from the `repr` values of its immediate subterms. An implementation of `Lambda` thus specifies a denotational semantics of the calculus.

Specifically, `NativeEval` maps closed lambda-terms either to OCaml integers or to OCaml functions. The calculus is untyped; the semantic domain is therefore the disjoint union `v` below. We also have to account for open terms, and hence introduce the environment: a list of `v` values that are assumed as meanings for free variables. The list is ordered by the De Bruijn index. Thus the term meaning `repr` is a *partial* function from the environment to `v`:

```ocaml
module NativeEval = struct
  [@@@warning "-8"]  (* annotation: partial pattern-match is intentional, don't warn *)
  type v =                  (* the value domain *)
    | I of int
    | A of (v->v)           (* generally, a partial function *)
  type repr = v list -> v   (* also a partial function *)
  let int x = fun _ -> I x
  let vz    = fun (h::_) -> h
  let vs e  = fun (_::t) -> e t
  let lam e = fun env -> A (fun x -> e (x::env))
  let (/) e1 e2 = fun env -> match e1 env with A f -> f (e2 env)
  let inc = fun _ -> A (function I x -> I (x+1))   (* partial *)
  let ifz = fun _ -> A (function                   (* partial *)
    | I 0            -> A (function A f -> A (fun _ -> f (I 0)))
    | I n when n > 0 -> A (fun _ -> A (function A f -> f (I (n-1)))))
end
```
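The discussion below invites the reader to write a `Show` (pretty-printing) interpreter of `Lambda`. As a hedged preview, here is our own minimal sketch; the version in the accompanying code may differ in details such as parenthesization:

```ocaml
(* A minimal Show sketch (our own): a term means a function from the
   number of enclosing lambdas to its printed form. The parenthesization
   is naive; the accompanying code's Show may be smarter. *)
module Show = struct
  type repr = int -> string            (* the count of enclosing lambdas *)
  let int x = fun _ -> string_of_int x
  let vz    = fun d -> "x" ^ string_of_int d
  let vs e  = fun d -> e (d-1)         (* peel off one enclosing lambda *)
  let lam e = fun d -> "Lx" ^ string_of_int (d+1) ^ ". " ^ e (d+1)
  let (/) e1 e2 = fun d -> "(" ^ e1 d ^ " " ^ e2 d ^ ")"
  let inc   = fun _ -> "S"
  let ifz   = fun _ -> "ifz"
end
```

With this sketch, the term `lam vz` applied to depth 0 prints as `"Lx1. x1"`, matching the output quoted below.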

It should now be clear what `ifz` means: `ifz e` is a function of two arguments, to be called the z-continuation and the p-continuation. If `e` means zero, `ifz e z-cont p-cont` means `z-cont 0`. Otherwise, if `e` is a positive integer `n`, the p-continuation is invoked with `n-1` as the argument.

`NativeEval` is but one implementation of `Lambda`. Another implementation, called `Show`, maps each term to a string, the pretty-printed representation of the term. The reader may want to write this implementation on their own (and then compare it to the one in the accompanying code).

A lambda-term, say, `lam vz`, in the `NativeEval` interpretation means the identity function on `NativeEval.v`. According to the `Show` implementation, however, the same term means the string `"Lx1. x1"`. One may ask: what does the term `lam vz` mean by itself? A glib answer is that a term embedded in the tagless-final style means whatever its interpretation makes it mean. Technically, an embedded term may be regarded as a function that takes an interpretation of its syntactic forms (i.e., an implementation of its signature) and returns the `repr` value produced by that implementation. To say it in OCaml:

```ocaml
type lambda = {e: 'a. (module Lambda with type repr = 'a) -> 'a}
```

Different implementations of `Lambda` generally choose different concrete types for `repr` -- therefore the type `lambda` stands for a polymorphic function: there is an implicit `forall` before `'a.`, quantifying that type variable. Moreover, `lambda` is a first-class polymorphic function: we do want to pass terms as arguments to other functions. In OCaml, first-class polymorphic values have to be embedded in a record (in our case, the record with the field `e`). Since we chose modules for tagless-final signatures and implementations, `lambda`'s argument is a value of a module type: OCaml has first-class modules. We could just as well have used OCaml records or objects -- and we will use something like that in the follow-up article. The modules are more convenient, and we use them for now.

Although `lambda` is the glib technical answer to the question about the inherent meaning of embedded terms, it can be taken seriously. In fact, we may define a denotational semantics in which our lambda-calculus terms indeed mean `lambda`. It is the most banal -- one may say, `syntactic' -- implementation of `Lambda`. We show only the representative part; the reader may want to finish it as an exercise.

```ocaml
module LambdaSelf : (Lambda with type repr = lambda) = struct
  type repr = lambda
  let vz =
    {e = fun (type a) (module L : Lambda with type repr = a) -> L.vz}
  let vs e =
    {e = fun (type a) ((module L : Lambda with type repr = a) as l) -> L.vs (e.e l)}
  let lam e =
    {e = fun (type a) ((module L : Lambda with type repr = a) as l) -> L.lam (e.e l)}
  ...
end
```
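For reference, the remaining cases can be completed in the same pattern (our own version of the exercise, to be placed inside `LambdaSelf` instead of the `...`; the accompanying code may differ):

```ocaml
(* One way to finish the exercise: the remaining LambdaSelf cases,
   all following the same pattern of passing the interpreter along *)
let int x =
  {e = fun (type a) (module L : Lambda with type repr = a) -> L.int x}
let (/) e1 e2 =
  {e = fun (type a) ((module L : Lambda with type repr = a) as l) ->
         L.(e1.e l / e2.e l)}
let inc = {e = fun (type a) (module L : Lambda with type repr = a) -> L.inc}
let ifz = {e = fun (type a) (module L : Lambda with type repr = a) -> L.ifz}
```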

One can now say what `lam vz` means `by itself'. It is a function that takes an interpreter and tells it how to compose that term's own denotation: first find the meaning of `vz` and then use the interpretation of `lam` to build the complete denotation. The `intrinsic' meaning of an embedded term is, hence, the guiding of its interpretation according to its structure. This is what *syntax* means, isn't it? (The reason the above implementation is called `LambdaSelf` will become clear in the follow-up article.)

Banal as it may be, `LambdaSelf` is insightful, and also practically useful. First of all, it lets us easily write terms and have them interpreted. Here are a few sample terms:

```ocaml
open LambdaSelf
let ci = lam vz
let ck = lam @@ lam @@ vs vz                 (* the K combinator *)
let c2 = lam @@ lam @@ vs vz / (vs vz / vz)  (* second Church numeral *)
let et3  = lam @@ inc / (inc / (vz / int 1)) (* higher-order, numeric *)
let et3a = et3 / (lam @@ inc / (inc / (inc / vz)))
let delta  : lambda = lam @@ vz / vz
let lomega : lambda = lam @@ delta / delta
```

They all have the type `lambda`, which is inferred in all cases (although we often prefer to write the type explicitly as an annotation, to make sure we define a proper embedded term). We use OCaml definitions to give names to embedded terms; moreover, we can use the names to incorporate already defined terms into new ones (see `et3a` and `lomega`).

A bit more complex is the applicative fixpoint combinator,

```ocaml
let fixa : lambda =
  lam (vz / (lam @@ delta / (lam @@ vs (vs vz) / (lam (delta / vs vz / vz))) / vz))
```

using which we define the recursive function to add two numbers, by induction on the second summand:

```ocaml
let add : lambda = lam @@ fixa / (lam @@ lam @@
  let y = vz and self = vs vz and x = vs (vs vz) in
  ifz / y / (lam @@ vs x) / (lam @@ inc / (vs self / vz)))
```

We take another advantage of OCaml let-bindings, to give meaningful names to De Bruijn indices.

The `lambda` terms are also straightforward to evaluate:

```ocaml
let eval : lambda -> NativeEval.v = fun {e} -> e (module NativeEval) []
```

The evaluator takes a (closed) `lambda` term, passes it the `NativeEval` implementation as a first-class module, and applies the resulting `repr` value to the empty environment. Thus we may try

```ocaml
eval Sample.et3a                        (* NativeEval.I 6 : NativeEval.v *)
eval LambdaSelf.(add / int 1 / int 4)   (* NativeEval.I 5 : NativeEval.v *)
```
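A couple more spot checks along the same lines (our own additions; they assume the sample terms `ck` and `c2` defined above are in scope):

```ocaml
(* Our own spot checks, assuming ck, c2 and the LambdaSelf operations
   defined above are in scope *)
let () = assert (eval LambdaSelf.(ck / int 1 / int 2) = NativeEval.I 1)
(* c2 f x = f (f x): incrementing 0 twice gives 2 *)
let () = assert (eval LambdaSelf.(c2 / (lam @@ inc / vz) / int 0) = NativeEval.I 2)
```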

(The evaluation result is shown in the comments.) Likewise, we define `show : lambda -> string`, so that `show c2` gives `"Lx1. Lx2. x1 (x1 x2)"`. When printing terms, we use concrete variable names for readability.

The second part of the present article is about the evaluators that turn a closed lambda-calculus term into the corresponding value term (the weak normal form). If the weak normal form does not exist, they diverge. They differ from the earlier `eval : lambda -> NativeEval.v` and `show : lambda -> string` in that they map a lambda-term to another lambda-term. Thus the two evaluators to be described below both have the type `lambda -> lambda`. The identity function also has this type; the evaluators, however, always return a lambda-term that is a value. A value is a closed term of a specific shape: either `int x` for some integer `x` or `lam e` for some `e`.

Our first evaluator is `denotational' and hence easily coded in the tagless-final style. In fact, we have already written most of its code: `NativeEval`, which maps a `lambda` term to its `NativeEval.v` meaning. If only we could turn this meaning into a value lambda-term... Let's try:

```ocaml
let rec dyn : NativeEval.v -> lambda = function
  | NativeEval.I x -> LambdaSelf.int x
  | NativeEval.A f -> LambdaSelf.lam (dyn (f ???))
```

Since `f` above is a `v -> v` function, we have to apply it to something, in such a way that the result is easily convertible to the `lambda` representing the body of the abstraction. The easy solution is to extend the `NativeEval.v` data type with another variant, for the term being reconstructed:

```ocaml
type v =
  | I of int
  | A of (v->v)
  | T of lambda
```

Then

```ocaml
let rec dyn : v -> lambda = function
  | I x -> LambdaSelf.int x
  | T e -> e
  | A f -> LambdaSelf.lam (dyn (f (T LambdaSelf.vz)))
```

hoping that the function `f`, upon receiving `T LambdaSelf.vz`, will produce `T e` where `e` is the term that corresponds to the abstraction's body. For example, `lam vz` naturally means the (tagged) OCaml identity function: `A (fun x -> x)`; applying `dyn` gives us `LambdaSelf.lam LambdaSelf.vz : lambda`, as desired. We now have to adjust `NativeEval` to produce the hoped-for denotations.

There is a slight problem, however: we naturally wish `int 1` to be interpreted as `I 1` -- but not when it appears under `lam`, as in `lam (int 1)`. In the latter case, it ought to be interpreted as `LambdaSelf.int 1`. Luckily, we already have the solution to this context-sensitivity problem. To account for open terms, the meaning `repr` in `NativeEval` is a function from the environment to `v`. The environment is a list of `v` values to use when substituting for free variables. Normally, these values are tagged either with `I` or `A`. However, the `dyn` function above will also pass `T LambdaSelf.vz` to request the result as a `T`-tagged `lambda` term (the `read-out'). We can, hence, easily tell if the current context is under lambda: the environment contains at least one `T`-tagged value at the top. (A little exercise: can we say that the environment is either a list of `I` and `A` values, or entirely `T`-tagged values?)

```ocaml
module NBE = struct
  [@@@warning "-8"]  (* partial pattern-match is intentional *)
  type v =           (* the value domain *)
    | I of int
    | A of (v->v)
    | T of lambda
  type repr = v list -> v
  let rec dyn : v -> lambda = function
    | I x -> LambdaSelf.int x
    | T e -> e
    | A f -> LambdaSelf.(lam (dyn (f (T vz))))
  let int x = function
    | T _::_ -> T (LambdaSelf.int x)   (* readout in progress *)
    | _      -> I x
  let vz = fun (h::_) -> h   (* Substitution under lam is OK *)
  let vs e = fun (_::t) -> match e t with
    | T e -> T LambdaSelf.(vs e)
    | e   -> e
  let lam e = function
    | T _::_ as env ->   (* readout in progress *)
        T LambdaSelf.(lam (dyn (e (T vz::env))))
    | env -> A (fun x -> e (x::env))
  let (/) e1 e2 = function
    | T _::_ as env -> T LambdaSelf.(dyn (e1 env) / dyn (e2 env))
    | env -> match e1 env with A f -> f (e2 env)
  let inc = function
    | T _::_ -> T LambdaSelf.inc
    | _ -> A (function
        | I x -> I (x+1)
        | T e -> T LambdaSelf.(inc / e))
  let ifz = (* exercise *)
end
```

Most denotations have two cases: one for read-out (for building terms) and the other as in `NativeEval`. Only `vz` has a single interpretation: the substitution is performed uniformly in any context.

The denotational evaluator is thus:

```ocaml
let neval : lambda -> (NBE.v list -> NBE.v) = fun {e} -> e (module NBE)
let nbe : lambda -> lambda = fun e -> NBE.dyn @@ neval e []
let nbeshow : lambda -> string = fun e -> show @@ nbe e
```

For example, `nbeshow lomega` produces the same string as `show lomega`: lambda-abstractions always evaluate to themselves, even if there are redices inside. Not surprisingly, `nbeshow (add / int 1 / int 4)` yields `"5"`, which is the pretty-printed representation of `int 5`. Now, however, we can also `nbeshow (add / int 1)`. To be sure, we could `eval (add / int 1)` with `NativeEval`. The result, however, is an (`A`-tagged) OCaml function, which is opaque. The new `nbeshow` lets us see the resulting lambda-term:

```
"Lx1. ifz x1 (Lx2. 1) (Lx2. S ((Lx3. (Lx4. x4 x4) (Lx4. (Lx5. Lx6. ifz x6 (Lx7. 1) (Lx7. S (x5 x7))) (Lx5. (Lx6. x6 x6) x4 x5)) x3) x2))"
```

Still, the result of `nbe (add / int 1)` agrees with that of `NativeEval`, the definitional interpreter. To quickly check:

```ocaml
let e = LambdaSelf.(add / int 1) in
let A f1 = eval e in
let A f2 = eval (nbe e) in
List.for_all (fun n -> f1 (NativeEval.I n) = f2 (NativeEval.I n))
  [0;1;2;3;4;100]
```

produces `true`: `f1` and `f2` agree, at least on the given sample inputs.

One may easily observe from the `NBE` code that, when restricted to `v` values without `T` tags, it behaves just like `NativeEval`. In particular, for any closed term `e`, `neval e []`, if it terminates, is either `I x` for some integer `x` or `A f` for some `v->v` function. One may further observe, with slightly more effort, that if `v` is the result of `neval e []` for some closed term `e`, then `neval (dyn v) []` is `v`, extensionally. (The extra effort is in generalizing this statement to open terms.) That is, `dyn` is the left inverse of the `NBE` interpretation. A function like `dyn` is typically called `reify' in the normalization-by-evaluation literature. Putting these observations together, we obtain that for any closed term `e`, `nbe e` is a value term and

```
eval e === eval (nbe e)
```

where `===` means that either the two sides both diverge, or they are the same integer or the same, as a mapping, OCaml function (that is, `===` is the extensional equality). That means `nbe` is meaning-preserving and it does yield a value: it is a correct evaluator.

Incidentally, normalization-by-evaluation is a technique for normalizing terms, or *partially evaluating* them. A partial evaluator can perform reductions even under lambda. Although this often improves efficiency when the lambda-term is subsequently applied, in this article we keep close to the operational semantics and avoid any extra reductions. (It is also worth contemplating what may happen when partially evaluating a term like `lomega`.)

Finally we come to the reducer and the operational semantics. The familiar Plotkin-style structural operational semantics (SOS) is defined by the following reduction rules, adjusted to our notation:

```
-------  (where v is a value: either lam e or int x)
v --> v

------------------------  (beta)
(lam e) / v --> e{vz:=v}

-------------------------  (inc)
inc / int x --> int (x+1)

-----------------------------------------  (if0)
ifz / int 0 --> lam (lam (vs vz / int 0))

------------------------------------------  (ifn) (n>0)
ifz / int n --> lam (lam (vz / int (n-1)))

     e1 --> e1'
--------------------  (e1 is not a value)
e1 / e2 --> e1' / e2

     e2 --> e2'
------------------  (e2 is not a value)
v / e2 --> v / e2'
```
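For instance, here is a worked reduction sequence under these rules (our own example):

```
inc / ((lam vz) / int 1)
  --> inc / int 1    (compatibility for e2, then beta: vz{vz:=int 1} = int 1)
  --> int 2          (inc)
```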

where `e{vz:=v}` stands for the substitution of the value `v` for the free variable `vz` in `e`. The rules are all disjoint; the last two are the compatibility rules, implicitly defining the evaluation contexts. The reducer `sos` below applies the rules until it either obtains a value, or no rule is applicable (that is, it gets stuck). That is, `sos` computes the transitive closure of `-->`.

Incidentally, the operational semantics may also be specified in the big-step, `natural' style, as the `==>` relation between a term and the corresponding value:

```
-------
v ==> v

e1 ==> lam e1'    e2 ==> v2    e1'{vz:=v2} ==> v
------------------------------------------------
                 e1 / e2 ==> v
```

(The rules for `inc` and `ifz` are left as an exercise.)

Whatever style of operational semantics we choose, we have to analyze the shape of a term -- is it an application, is it a lambda-term, and if so, what is its body, etc. -- and substitute for a free variable in a term. It might be surprising to some that this classification task can be formulated as a denotational semantics. The semantic domain is a pair of the term itself (so that we may return it, should it turn out to be a value) and its kind; see `repr` below. The kind tells if the term is an application (and if so, its applicands) or a value. In the latter case, is it an integer value or a functional value, that is, something that takes an argument and either substitutes it or does something fancier. Since `repr` denotations are built bottom-up, one has to account for another possibility: the term may be open. An open term is a term with free variables to substitute; its classification depends on what those variables mean. We also keep track of the depth of an open term, the largest De Bruijn index of its free variables (which is the length of the needed environment).

```ocaml
module Classifier = struct
  [@@@warning "-8"]  (* partial pattern-match is intentional *)
  type repr = {term: lambda;   (* term as it is *)
               kind: kind}     (* and its kind *)
  and kind =
    | App of repr * repr   (* subject to reductions *)
    (* Expressions of the kind Int and Applicable are values *)
    | Int of int
    | Applicable of (repr -> repr)  (* can be applied to an argument *)
    (* All others are closed. The first argument is the depth: the number
       of lambdas needed to close the expression *)
    | Open of int * (repr list -> repr)
  let int x = {term = LambdaSelf.int x; kind = Int x}
  let vz = {term = LambdaSelf.vz; kind = Open (1,fun (h::_) -> h)}
  let rec vs e = match e.kind with
    | Open (n,e') ->
        {term = LambdaSelf.vs e.term;
         kind = Open (n+1,fun (_::t) -> vs (e' t))}
    (* vs may end up being applied to a closed value after the
       substitution. In that case, it does nothing. *)
    | _ -> e
  let rec lam e =
    {term = LambdaSelf.lam e.term;
     kind = match e.kind with
       | Open (1,t) -> Applicable (fun x -> t [x])
       | Open (n,t) when n > 1 -> Open (n-1,fun env -> lam (t (vz::env)))
       | _ -> Applicable (fun x -> e)  (* constant function *)
    }
  let[@warning "+8"] rec (/) e1 e2 =
    {term = LambdaSelf.(e1.term / e2.term);
     kind = match (e1.kind,e2.kind) with
       | (Open(n1,t1),Open(n2,t2)) ->
           Open(max n1 n2, fun env -> t1 env / t2 env)
       | (Open(n1,t1),_) -> Open(n1,fun env -> t1 env / e2)
       | (_,Open(n2,t2)) -> Open(n2,fun env -> e1 / t2 env)
       | _ -> App (e1,e2)
    }
  let inc = {term = LambdaSelf.inc;
             kind = Applicable (function {kind = Int x} -> int (x+1))}
  let ifz = (* exercise *)
end
```

One may notice that as we build `repr` bottom-up, we also build the function to substitute for the term's free variables. The substitution function is a bit smart: it avoids rebuilding, or even looking into, subterms that are known to be closed and hence have nothing to substitute for.

It is worth pointing out how `kind` differs from the familiar abstract syntax tree for the lambda-calculus. First, we do not distinguish `inc` and `ifz` from `lam`: each is a value that takes an argument and does a reduction with it. However, we do distinguish open terms.

Once the classification is done, reduction is just following the rules. Here is the reducer for the SOS semantics:

```ocaml
let sos : lambda -> lambda = fun {e} ->
  let open Classifier in
  let rec step : repr -> repr =   (* the --> reduction of SOS *)
    fun e -> match e.kind with
    | Applicable _ | Int _ -> e   (* value *)
    | Open _ -> assert false      (* cannot happen *)
    | App (e1,e2) -> match (e1.kind,e2.kind) with
      | (Applicable f,Int _)
      | (Applicable f,Applicable _) -> f e2   (* redex *)
      | (App _,_) -> step e1 / e2   (* compatibility rules *)
      | (_,App _) -> e1 / step e2
      | _ -> failwith "stuck"
  in
  let rec loop : repr -> repr =   (* transitive closure of step *)
    fun e -> match e.kind with
    | Applicable _ | Int _ -> e   (* value *)
    | _ -> loop (step e)          (* keep stepping *)
  in
  (loop (e (module Classifier))).term

let sosshow : lambda -> string = fun e -> show @@ sos e
```

The `sos` reducer can be used just like the earlier `nbe` evaluator. They do produce identical results (see the accompanying code for that test, among many others).

The naively implemented SOS reducer is inefficient: too many pattern-matches on the shape of the term. (Do you see that?) A simple modification turns it into the efficient, natural, big-step reducer -- which we suggest as an exercise. The accompanying code also makes the big-step reducer count the number of reductions.

We thus demonstrated (among other things) two evaluators for the call-by-value lambda-calculus with constants. One follows the denotational approach; the other is operational, reducing a term to a value by successively rewriting it according to the reduction rules. We can do operational semantics in the tagless-final style, after all. Curiously, even the operational approach relies on a denotational semantics.

The evaluators are written in the host language, OCaml. Next we lower the evaluators into the embedded language itself, showing off our calculus serving as a metalanguage to embed itself and evaluate the self-embedded expressions.

**Version**- The current version is December 2018, based on fragments written in 2011
**References**- reducer.ml [21K]

The complete code for all the interpreters, plus many examples and tests

Tagless-final with objects

A simpler example of the tagless-final DSL embedding, using objects (which are extensible records)

- We demonstrate the ordinary and the administrative-redex--less call-by-value
Continuation Passing Style (CPS) transformations that assuredly produce
well-typed terms and are *patently* total.

Our goal here is not to evaluate, view or serialize a tagless-final term, but to transform it into another one. The result is a tagless-final term, which we can interpret in multiple ways: evaluate, view, or transform again. It is natural to require that the result of transforming a well-typed term be well-typed. In the tagless-final approach that requirement is satisfied automatically: after all, only well-typed terms are expressible. We impose a more stringent requirement that a transformation be total. In particular, the fact that the transformation handles all possible cases of the source terms must be patently, syntactically clear. The complete coverage must be so clear that the metalanguage compiler should be able to see it, without the aid of extra tools.

Since the only thing we can do with tagless-final terms is to interpret them, the CPS transformer is written in the form of an interpreter. It interprets source terms yielding transformed terms, which can be interpreted in many ways. In particular, the terms can be interpreted by the CPS transformer again, yielding 2-CPS terms. CPS transformers are composable, as expected.

A particular complication of the CPS transform is that the type of the result is different from the type of the source term: the CPS transform translates not only terms but also types. Moreover, base types and the arrow type are translated in different ways. To express CPS, we need an interpreter that gives the meaning not only to terms but also to types. In particular, what the function types denote should be up to a particular interpreter. It turns out the existing tagless-final framework is up to that task: with the help of type families, we can after all define an instance of Symantics that interprets source types as CPS-transformed types.

The ordinary (Fischer or Plotkin) CPS transform introduces many administrative redices, which make the result hard to read. Danvy and Filinski proposed a one-pass CPS transform, which relies on the metalanguage to get rid of the administrative redices. The one-pass CPS transform can be regarded as an example of normalization-by-evaluation.
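For reference, the standard call-by-value translation of types (with answer type `o`), which such a Symantics instance realizes at the type level, can be summarized as follows (our own recap of the standard definition):

```
CPS(b)      = b                                   (base types)
CPS(a -> b) = CPS(a) -> (CPS(b) -> o) -> o

and a source term of type a becomes a term of type (CPS(a) -> o) -> o.
```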

CPS.hs [10K]

Ordinary and one-pass CPS transforms and their compositions

TTFdBHO.hs [2K]

The simplest tagless-final transformer, from the De-Bruijn--based Symantics to the one based on higher-order abstract syntax

Olivier Danvy and Andrzej Filinski. Representing Control: A Study of the CPS Transformation.
Mathematical Structures in Computer Science, 1992.

- We present the unexpectedly straightforward embedding of the staged
language that may be regarded as a simple version of MetaOCaml. In
fact, MetaOCaml quotations and splices can be translated to our
language in the most direct way. The language supports open
quotation (allowing free variables in fragments of the generated code)
and cross-stage persistence.
The staged language is embedded into OCaml in the tagless-final style, as the signature `Code` and its three implementations: `CodeReal` (in terms of the ordinary OCaml), `CodeString` (performing code generation via strings), and `CodeCode` (in terms of MetaOCaml). The `CodeReal` implementation -- of the staged calculus in terms of the ordinary, unstaged OCaml -- embodies an *unstaging translation*. It is orders of magnitude simpler than the other such translations described in the literature, requiring no fancy types or type-level operations. The staged language can be extended to support the polymorphic `let`.

**Version**- The current version is February 2017
**References**- let-insertion.pdf [225K]

Generating Code with Polymorphic let: A Ballad of Value Restriction, Copying and Sharing

Electronic Proceedings in Theoretical Computer Science (EPTCS) 241, 2017, pp. 1-22 (Post-Proceedings ML/OCaml 2015) DOI: 10.4204/EPTCS.241.1

Sec 3.1 of the paper describes a staged calculus in the form of code-generating combinators, and the translation of MetaOCaml quotations and splices into it. The section formulates the correctness properties of the translation.

polylet.ml [17K]

The OCaml code for the embedding of the staged language into OCaml, and a few examples

Closing the Stage: From Staged Code to Typed Closures

The previous, immensely more complex unstaging translation

Tagless-Staged: a step toward MetaHaskell

Type- and Scope-Safe Code Generation with Mutable Cells

The tagless-final embedding, in Haskell and OCaml, of a staged calculus with a rather complicated, MLF-like type system. The type system ensures scope safety of the generated code even in the presence of effects.

- One may think that only those DSL can be embedded in Haskell whose
type system is a subset of that of Haskell. To counter that impression
we show how to faithfully embed typed
*linear*lambda calculus. Any bound variable must be referenced exactly once in abstraction's body. As before, only well-typed and well-formed terms are representable. Haskell as the metalanguage will statically reject as ill-typed the attempts to represent terms with a bound variable referenced several times -- or, as in the K combinator, never.We build on the embedding of the ordinary simply typed lambda calculus with De Bruijn indices described earlier. An object term of the type

`a`

was represented as a value of the type`repr h a`

, where the binary type constructor`repr`

is a member of the class`Symantics`

. Here`h`

stands for the type environment, assigning types to free variables (`hypotheses') of a term. Linear lambda calculus regards bound variables as representing resources; referencing a variable consumes the resource. We use the type environment for tracking the state of resources: available or consumed. The type environment becomes the*type state*. We follow the approach described in Variable (type)state `monad'.We represent linear lambda terms by Haskell values of the type

`repr hi ho a`, where `hi` stands for the variable state before
evaluating the term and `ho` represents the state after evaluating the
term (to be more precise, `hi` and `ho` are the types of the variable
states). We can determine the types, and hence the state of the
variables, statically: as usual, the type checker does abstract
interpretation. In our tagless-final encoding, `lam` has the following
type:

```haskell
lam :: repr (F a,hi) (U,ho) b -> repr hi ho (a->b)
```

The expression `(lam e)` has the type `repr hi ho (a->b)` provided the
body of the abstraction, `e`, has the type `repr (F a,hi) (U,ho) b`.
That is, in the environment extended with a term of the type `a`, the
body must produce the value of type `b`. The body must consume the
term at the top of the environment, changing the type of the first
environment cell from `F a` to `U` (the type of the used variable).

A trivial modification turns the embedding of the linear
lambda-calculus into that of the affine lambda-calculus, which permits
ignoring bound variables. The K combinator becomes expressible.
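To make the typing discipline concrete, here is a minimal
self-contained sketch (the names `LSym`, `z`, `s` and the printing
interpreter are illustrative assumptions; LinearLC.hs differs in
details). The type state threads through every combinator, so the
Haskell type checker itself rejects non-linear terms:

```haskell
data F a   -- an available hypothesis of type a
data U     -- a used (consumed) hypothesis

-- Minimal linear calculus with De Bruijn-style variables (z, s)
class LSym repr where
  z   :: repr (F a, h) (U, h) a                      -- use variable 0
  s   :: repr hi ho a -> repr (any, hi) (any, ho) a  -- skip variable 0
  lam :: repr (F a, hi) (U, ho) b -> repr hi ho (a -> b)
  app :: repr hi h (a -> b) -> repr h ho a -> repr hi ho b

-- A printing interpreter; the states hi and ho are phantom
newtype S hi ho a = S { unS :: Int -> String }

instance LSym S where
  z         = S $ \h -> "v" ++ show (h - 1)
  s (S v)   = S $ \h -> v (h - 1)
  lam (S e) = S $ \h -> "(\\v" ++ show h ++ " -> " ++ e (h + 1) ++ ")"
  app (S f) (S x) = S $ \h -> "(" ++ f h ++ " " ++ x h ++ ")"

tId :: LSym repr => repr h h (a -> a)
tId = lam z                      -- accepted: the variable is used once

tFlip :: LSym repr => repr h h (a -> (a -> b) -> b)
tFlip = lam (lam (app z (s z)))  -- accepted: each variable used once

-- Rejected by the type checker: lam (app z z) uses the variable
-- twice (U does not match F a); lam (s z), as in the K combinator,
-- never uses it (F a does not match U).
```

Note how linearity is enforced purely by unification of the type-state
rows: no run-time checks are involved.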

LinearLC.hs [11K]

Commented code defining the typed linear lambda calculus and its two
interpreters, to evaluate and to show linear lambda terms. Later we
add general abstractions, imposing no constraints on the use of bound
variables.

Jeff Polakow: Embedding a full linear lambda calculus in Haskell

Proceedings of the ACM SIGPLAN Haskell Symposium, 2015, pp. 177-188.

Polakow's tagless-final linear lambda-calculus interpreter relies on higher-order abstract syntax, rather than the De Bruijn indices of LinearLC.hs. He implements the full linear lambda calculus with additives and units.

- We present a technique for compiling lambda-calculus expressions into
SKI combinators. Unlike the well-known bracket abstraction based on
(syntactic) term rewriting, our algorithm relies on a specially
chosen, *compositional* semantic model of generally open lambda
terms. The meaning of a closed lambda term is the corresponding SKI
combination. For simply-typed as well as unityped terms, the meaning
derivation mirrors the typing derivation. One may also view the
algorithm as an algebra, or a non-standard evaluator for lambda terms
(i.e., a denotational semantics).

The algorithm is implemented as a tagless-final compiler for
(uni)typed lambda-calculus embedded as a DSL into OCaml. Its type
preservation is clear even to OCaml. The correctness of both the
algorithm and of its implementation becomes clear.

Our algorithm is easily amenable to optimizations. In particular, its output and the running time can both be made linear in the size (i.e., the number of all constructors) of the input De Bruijn-indexed term.
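For contrast, the well-known syntactic bracket abstraction mentioned
above can be sketched in a few lines of Haskell. This is the textbook
rewriting algorithm with assumed constructor names, not the paper's
compositional translation:

```haskell
-- Classic bracket abstraction by term rewriting:
--   [x]x = I;  [x]t = K t (x not free in t);  [x](t u) = S ([x]t) ([x]u)
data T = V String | A T T | L String T | S | K | I
  deriving (Eq, Show)

free :: String -> T -> Bool
free x (V y)   = x == y
free x (A f a) = free x f || free x a
free x (L y b) = x /= y && free x b
free _ _       = False

-- Eliminate lambdas bottom-up
conv :: T -> T
conv (A f a) = A (conv f) (conv a)
conv (L x b) = abstr x (conv b)
conv t       = t

abstr :: String -> T -> T
abstr x (V y) | x == y     = I
abstr x t | not (free x t) = A K t
abstr x (A f a)            = A (A S (abstr x f)) (abstr x a)
abstr _ t                  = t   -- unreachable after conv

-- Example: the K combinator \x y -> x compiles to S (K K) I,
-- illustrating the code blowup the semantic algorithm avoids.
```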

**Version**- The current version is May 2018
**References**- ski.pdf [354K]

The paper, presented on May 9, 2018 at FLOPS 2018 (Nagoya, Japan) and
published in Springer's LNCS 10818, pp. 33-50, 2018.
doi:10.1007/978-3-319-90686-7_3

skconv.ml [34K]

The complete code accompanying the paper

The code presents the basic SKI translation and all optimizations described in the paper, along with several interpreters.

- Olivier Danvy's original POPL96 paper on type-directed partial
evaluation used an untyped target language, represented as an
algebraic data type. Type preservation was not apparent and had
to be proved. In our presentation, the result of reification is a
*typed* expression, in the tagless-final form. Type preservation of
reification is now syntactically apparent and is verified by the
Haskell type-checker. In the tagless-final presentation, reification
and reflection seem particularly symmetric, elegant and insightful.

TDPE.hs [6K]

Tagless-final presentation of type-directed partial evaluation

ToTDPE.hs [<1K]

The imported module with sample functions to reify. Compiling this
module makes for a nicer example.

<http://www.brics.dk/~danvy/tdpe-ln.pdf>

Olivier Danvy: Lecture notes on type-directed partial evaluation. The lecture notes are based on his POPL96 paper.
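The gist of tagless-final reification can be conveyed in a few lines.
The following is a sketch only, with assumed names (TDPE.hs is more
general), demonstrating the classic eta-expansion example:

```haskell
{-# LANGUAGE GADTs, TypeFamilies #-}

-- Tagless-final object language (pure fragment)
class Symantics repr where
  lam :: (repr a -> repr b) -> repr (a -> b)
  app :: repr (a -> b) -> repr a -> repr b

-- Semantic domain: base types are code, arrows are meta-functions
type family Sem (repr :: * -> *) a where
  Sem repr (a -> b) = Sem repr a -> Sem repr b
  Sem repr a        = repr a

-- Run-time type representations, directing reify/reflect
data TQ a where
  TInt :: TQ Int
  TArr :: TQ a -> TQ b -> TQ (a -> b)

reify :: Symantics repr => TQ a -> Sem repr a -> repr a
reify TInt         v = v
reify (TArr t1 t2) f = lam (\x -> reify t2 (f (reflect t1 x)))

reflect :: Symantics repr => TQ a -> repr a -> Sem repr a
reflect TInt         e = e
reflect (TArr t1 t2) e = \v -> reflect t2 (app e (reify t1 v))

-- A printer, to observe the typed residual code
newtype P a = P { unP :: Int -> String }
instance Symantics P where
  lam f = P $ \d ->
    "(\\x" ++ show d ++ " -> "
      ++ unP (f (P (const ("x" ++ show d)))) (d + 1) ++ ")"
  app (P f) (P a) = P $ \d -> "(" ++ f d ++ " " ++ a d ++ ")"

-- Reifying the identity at type (Int->Int)->(Int->Int) eta-expands it
etaEx :: P ((Int -> Int) -> (Int -> Int))
etaEx = reify (TArr (TArr TInt TInt) (TArr TInt TInt)) id
```

The residual term is a typed, tagless-final expression: type
preservation needs no separate proof because the Haskell type checker
verifies it.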

- We demonstrate a tagless-final embedding of the simply-typed
lambda-calculus with integers and constants, and its uniform
interpretations with three different evaluation strategies:
call-by-name, call-by-value and call-by-need. The three interpreters
share almost all the code, differing only in the interpretation of
object-language abstractions (`lam`-forms). The semantics of
abstraction is indeed what sets the three evaluation orders apart. The
examples also implement tracing of the interpreter actions, to make
clear which expressions are evaluated, in which sequence, and how many
times. The interpreters are assuredly and patently type-preserving:
the host languages -- OCaml and Haskell -- guarantee that. We thus
obtain a higher-order embedded domain-specific language with a
selectable evaluation order.

In Haskell, we define the DSL as a type class `Symantics`. The type
class declaration specifies the syntax and its instances define the
semantics of the language. Our DSL is typed. We use the Haskell type
constant `Int` to also denote the DSL type of integers (although we
could have used a different name). DSL function types are represented
in Haskell by the type synonym `Arr repr a b`:

```haskell
class Symantics repr where
  int :: Int -> repr Int                          -- integer literals
  add :: repr Int -> repr Int -> repr Int         -- addition
  sub :: repr Int -> repr Int -> repr Int         -- and subtraction
  app :: repr (Arr repr a b) -> repr a -> repr b  -- application

type Arr repr a b = repr a -> repr b
```

This somewhat unusual representation of DSL functions permits the
uniform implementation of evaluation strategies. The class `Symantics`
specifies the part of our calculus that is interpreted the same way in
all evaluation strategies. Only DSL abstractions are interpreted
differently; therefore, we make them a distinct DSL `feature', defined
by its own type class:

```haskell
class SymLam repr where
  lam :: (repr a -> repr b) -> repr (Arr repr a b)
```

After introducing the convenient `macro' `let_` (which could have been
called `bind'), we write a sample object-language term as follows:

```haskell
let_ :: (Symantics repr, SymLam repr) =>
        repr a -> (repr a -> repr b) -> repr b
let_ x y = (lam y) `app` x

-- The representation of the lambda-calculus term
--   (\z x -> let y = x + x in y + y) (100 - 10) (5 + 5)
t2 :: (Symantics repr, SymLam repr) => repr Int
t2 = (lam $ \z -> lam $ \x -> let_ (x `add` x) $ \y -> y `add` y)
     `app` (int 100 `sub` int 10)
     `app` (int 5 `add` int 5)
```

The Haskell expression `t2` represents the lambda-calculus term
`(\z x -> let y = x + x in y + y) (100 - 10) (5 + 5)`; the variable
`z` does not occur in the body of the abstraction, whereas `x` occurs
twice.

We are ready to write interpreters for our DSL. We will interpret DSL
expressions of the type `a` as Haskell values of the type `m a`, where
`m` is some `MonadIO`. (`IO` is needed to print the trace of
interpreter actions.) The representation is common to the three
evaluation strategies. To distinguish them (when interpreting `lam`
later on) we add a phantom type parameter, the label `l`. It will take
the values `Name`, `Value` and `Lazy`; more strategies may be added
later on.

```haskell
newtype S l m a = S { unS :: m a }
  deriving (Monad, Applicative, Functor, MonadIO)
```

Here is the code common to all interpreters, regardless of the
evaluation strategy. The reason to parameterize the interpreter over
`MonadIO` is to print out the evaluation trace, so that we can see the
difference among the evaluation strategies in the number of performed
additions and subtractions:

```haskell
instance MonadIO m => Symantics (S l m) where
  int = return
  add x y = do
    a <- x
    b <- y
    liftIO $ putStrLn "Adding"
    return (a + b)
  sub x y = do
    a <- x
    b <- y
    liftIO $ putStrLn "Subtracting"
    return (a - b)
  app x y = x >>= ($ y)
```

Call-by-name is defined by the following interpretation of DSL
abstractions:

```haskell
data Name -- The label

instance Monad m => SymLam (S Name m) where
  lam f = return f
```

The abstraction, when applied, will receive an unevaluated argument
expression -- of the type `repr a`, or `S Name m a` -- using
(substituting) it as it is.

We evaluate the sample term under call-by-name

```haskell
runName :: S Name m a -> m a
runName x = unS x

t2SN = runName t2 >>= print
```

obtaining the result 40 and observing from the trace that the
subtraction was not performed: the value of `int 100 `sub` int 10` was
not needed to compute the result of `t2`. On the other hand, the
sub-expression `int 5 `add` int 5` was evaluated four times.

Call-by-value differs from call-by-name only in the interpretation of
abstractions:

```haskell
data Value

instance Monad m => SymLam (S Value m) where
  -- lam f = return (\x -> (f . return) =<< x)
  -- or, in the pointless notation, as below:
  lam f = return (f . return =<<)
```

As in the call-by-name evaluator, the lambda-abstraction receives an
unevaluated argument expression -- but now it evaluates the expression
before passing its result to the abstraction body `f`. This is
literally the definition of call-by-value. The very same sample term
`t2` can be interpreted differently:

```haskell
runValue :: S Value m a -> m a
runValue x = unS x

t2SV = runValue t2 >>= print
```

giving in the end the same result 40. Although the result of the
subtraction `int 100 `sub` int 10` in `t2` was not needed, the trace
shows it performed. On the other hand, the argument sub-expression
`int 5 `add` int 5` was evaluated only once. In call-by-value,
arguments are evaluated exactly once.

The call-by-need evaluator again differs from the others in one line,
the interpretation of abstractions:

```haskell
data Lazy

instance MonadIO m => SymLam (S Lazy m) where
  -- lam f = return (\x -> f =<< share x)
  -- or, in the pointless notation
  lam f = return ((f =<<) . share)
```

The evaluation of the body of the abstraction always starts by lazily
sharing the argument expression. (The implementation of `share` is
straightforward; see the code for details.) Again, this is the
definition of call-by-need. We run the very same term `t2` with the
new evaluator, obtaining the same result 40 and observing from the
execution trace that the subtraction was not evaluated (because it was
not needed) but the needed argument expression `int 5 `add` int 5` was
evaluated once. In call-by-need, application arguments are evaluated
*at most* once.

**Version**- The current version is 1.2, March 17, 2010
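The `share` combinator can be sketched as a memoizing thunk. This is a
minimal assumed version, not necessarily the CBAny.hs definition:

```haskell
import Control.Monad.IO.Class (MonadIO, liftIO)
import Data.IORef

-- share m returns a computation that runs m the first time it is
-- demanded and replays the cached result on every later demand.
share :: MonadIO m => m a -> m (m a)
share m = do
  memo <- liftIO $ newIORef Nothing
  return $ do
    cached <- liftIO $ readIORef memo
    case cached of
      Just x  -> return x
      Nothing -> do
        x <- m
        liftIO $ writeIORef memo (Just x)
        return x
```

Threading the cache through an `IORef` is what gives at-most-once
evaluation: the effectful argument computation runs only on first
demand.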
**References**- CBAny.hs [7K]

The complete code with examples and tests

call_by_any.ml [6K]

The OCaml version of the code, using modules and functors

CB.hs [7K]

The older version of the code

It was originally posted as ``CBN, CBV, Lazy in the same final tagless
framework'' on the Haskell-Cafe mailing list on Thu, 8 Oct 2009
00:54:14 -0700 (PDT).

CB98.hs [7K]

The same code in Haskell98, proving that the tagless-final approach
indeed requires fewer fancy type-system features. The fancy features
like the type families, used in CB.hs, add convenience and gloss; one
can do without them, however. The code can easily be rewritten in
OCaml, which has no type classes, type families or other bleeding-edge
features.

Parameterized syntax: interpreters without run-time interpretive overhead

Evaluating the same Scheme source language expression using call-by-value, call-by-reference, and call-by-name evaluation strategies.

- Jacques Carette has posed the problem of designing an embedded DSL for
math layout, with subscripts, superscripts, tables, fractions, and
other standard features. The language has to support an *extensible*
set of glyphs, such as Greek letters and special characters. It should
be possible to add more glyphs later, similar to `using' `stmaryrd`,
etc. in LaTeX. Adding a new glyph should not affect the existing
documents in any way. Another requirement is an extensible set of
renderers (formatters), such as plain text, LaTeX, HTML+MathML, etc.
One should be able to add a new formatter later, without having to
rewrite or even recompile the existing documents. A formatter does not
have to support all layout features or glyphs; however, an attempt to
render a document with an unsuitable formatter should be flagged as a
type error.

The tagless-final approach seems to easily fulfill all the
requirements, as the enclosed code demonstrates.

**Version**- The current version is May 2016
**References**- MathLayout.hs [5K]

Haskell code outlining the embedding of the layout language, with LaTeX and HTML backends

- Lambek calculus is a resource-sensitive calculus introduced by Lambek
in 1958, almost three decades before linear logic. Like linear logic,
Lambek calculus does not have the weakening rule. In fact, in the
non-associative Lambek calculus NL, the antecedent of a sequent is a
tree and there are *no* structural rules at all. Lambek calculus is
hence the purest, simplest, and the earliest substructural logic.
Adding the associativity and commutativity rules (that is, treating
the antecedent as a sequence rather than a tree, and allowing
exchange) turns Lambek calculus into a fragment of Multiplicative
Linear Logic (MILL).

The most noticeable difference of Lambek calculus from the
conventional or linear lambda-calculus is its *directional*
implications and abstractions. There are two function types, typically
written as `B / A` and `A \ B`, called left/right slash-types rather
than arrow types. A function of the type `B / A` accepts an argument
of the type `A` on the right; `A \ B` accepts the `A` argument on the
left. There are hence two rules for eliminating implications and,
correspondingly, two rules for introducing them, which bring in the
power of hypothetical reasoning. Although the NL calculus per se has
no structural rules, various NL theories add so-called structural
postulates: ways to rearrange the antecedent structure in particular
limited ways. All these features set Lambek calculi even farther apart
from the Haskell type system. And yet they can be embedded in Haskell,
in the tagless-final approach. All and only valid derivations are
represented by Haskell values of a particular `repr` type. One
tagless-final interpreter prints the ``yield'': the constants used in
the derivation. In linguistic applications the yield spells out the
sentence whose parse is represented by the derivation. Other
interpreters transform the derivation into a logical formula that
stands for the meaning of the sentence.

The so-called symmetric Lambek-Grishin calculus has, in addition to
directional implications, directional co-implications and interesting
symmetric structural rules for moving between the antecedent and
consequent structures. It too can be represented in the tagless-final
style. The semantic interpretation builds the meaning formula in
continuation-passing style.

**References**- Michael Moortgat: Typelogical Grammar

The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta (ed.)

<http://plato.stanford.edu/entries/typelogical-grammar/>

HOCCG.hs [28K]

Explanation of the code

Non-associative Lambek calculus NL with the non-standard semantic
interpretation. Applications to quantification, non-canonical
coordination and gapping.

LG.hs [10K]

Tagless-final embedding of the Lambek-Grishin symmetric calculus and its 1-CPS translation

Our starting point is the regular CBV CPS translation for lambda-LG,
described on p. 697 of the paper by Michael Moortgat, ``Symmetric
Categorial Grammar'', J. Philos. Logic, 2009. The original translation
(eq. (20) of the paper) yields many administrative beta-redexes. The
present translation uses lightweight staging to remove such redexes in
the process.

- Sandro Magi shows that different interpretations of the same DSL term may not only involve different run-time systems but also occur on different hosts.
**Version**- The current version is June 23, 2009
**References**- <http://higherlogics.blogspot.com/2009/06/mobile-code-in-c-via-finally-tagless.html>

The article by Sandro Magi