The calculus is implemented by embedding it into pure OCaml in the tagless-final style. The implementation was used to run all the examples in the paper and test the derived equational laws. One may use it for further experimenting and reasoning with (multiple) algebraic effects.
Our goal here is not to evaluate, view or serialize a tagless-final term, but to transform it into another one. The result is a tagless-final term, which we can interpret in multiple ways: evaluate, view, or transform again. It is natural to require that the result of transforming a well-typed term be well-typed. In the tagless-final approach, that requirement is satisfied automatically: after all, only well-typed terms are expressible. We impose a more stringent requirement that a transformation be total. In particular, the fact that the transformation handles all possible cases of the source terms must be patently, syntactically clear. The complete coverage must be so clear that the metalanguage compiler is able to see it, without the aid of extra tools.
Since the only thing we can do with tagless-final terms is to interpret them, the CPS transformer is written in the form of an interpreter. It interprets source terms yielding transformed terms, which can be interpreted in many ways. In particular, the terms can be interpreted by the CPS transformer again, yielding 2-CPS terms. CPS transformers are composable, as expected.
A particular complication of the CPS transform is that the type of the result is different from the type of the source term: the CPS transform translates not only terms but also types. Moreover, base types and the arrow type are translated in different ways. To express CPS, we need an interpreter that gives the meaning not only to terms but also to types. In particular, what the function types denote should be up to a particular interpreter. It turns out the existing tagless-final framework is up to that task: with the help of type families, we can after all define an instance of Symantics that interprets source types as CPS-transformed types.
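For instance, under the call-by-value CPS translation, base types translate to themselves while an arrow type additionally receives a continuation for its result. Below is a minimal sketch of such a type translation, assuming a fixed answer type w and a made-up name CPST; the actual interface of CPS.hs may differ.

{-# LANGUAGE TypeFamilies #-}

-- Hypothetical sketch: the CBV CPS translation on types,
-- with the answer type w
type family CPST w a where
  CPST w Int      = Int
  CPST w (a -> b) = CPST w a -> (CPST w b -> w) -> w

A source term of type a would then be represented at a type like (CPST w a -> w) -> w, that is, as a computation expecting the continuation for its translated value.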
The ordinary (Fischer or Plotkin) CPS transform introduces many administrative redices, which make the result too hard to read. Danvy and Filinski proposed a one-pass CPS transform, which relies on the metalanguage to get rid of the administrative redices. The one-pass CPS transform can be regarded as an example of normalization-by-evaluation.
CPS.hs [10K]
Ordinary and one-pass CPS transforms and their compositions
TTFdBHO.hs [2K]
The simplest tagless-final transformer, from the De Bruijn-based
Symantics to the one based on higher-order abstract syntax
Olivier Danvy and Andrzej Filinski.
Representing Control: A Study of the CPS Transformation.
Mathematical Structures in Computer Science, 1992.
The staged language is embedded into OCaml in the tagless-final style, as the signature Code and its three implementations: CodeReal (in terms of the ordinary OCaml), CodeString (performing code generation via strings), and CodeCode (in terms of MetaOCaml). The CodeReal implementation -- of the staged calculus in terms of the ordinary, unstaged OCaml -- embodies an unstaging translation. It is orders of magnitude simpler than the other such translations described in the literature, requiring no fancy types or type-level operations. The staged language can be extended to support the polymorphic let.
polylet.ml [17K]
The OCaml code for the embedding of the staged language into OCaml, and
a few examples
Closing the Stage:
From Staged Code to Typed Closures
The previous, immensely more complex unstaging translation
Tagless-Staged: a step toward MetaHaskell
Type- and
Scope-Safe Code Generation with Mutable Cells
The tagless-final embedding, in Haskell and OCaml, of a staged calculus
with a rather complicated, MLF-like type system. The type system ensures
scope safety of the generated code even in the presence of effects.
One requirement is an extensible set of glyphs, such as those provided by stmaryrd, etc.
in LaTeX. Adding a new glyph should not affect the existing documents
in any way. Another requirement is an extensible set of renderers (formatters),
such as plain text, LaTeX, HTML+MathML, etc. One should be able to add a new
formatter later, without having to re-write or
even recompile the existing documents. A formatter does not have to support
all layout features or glyphs. However, an attempt to render a document
with an unsuitable formatter should be flagged as a type error.
The tagless-final approach seems to easily fulfill all the requirements, as the enclosed code demonstrates.
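To give the flavor of how the requirements can be met, here is a minimal sketch (all names below are made up for illustration; the enclosed code is organized differently in detail): the base document forms are one type class, each extra glyph set is its own class, and each formatter is a set of instances.

class Doc repr where
  text :: String -> repr
  para :: [repr] -> repr

class Stmaryrd repr where        -- an extra, separately added glyph set
  oplus :: repr

newtype Plain = Plain { unPlain :: String }    -- a plain-text formatter

instance Doc Plain where
  text = Plain
  para = Plain . unlines . map unPlain

A document that uses oplus acquires the constraint Stmaryrd repr; an attempt to render it with the Plain formatter, which has no Stmaryrd instance, is rejected by the type checker -- exactly the required behavior.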
The algorithm is implemented as a tagless-final compiler for the (uni)typed lambda-calculus embedded as a DSL into OCaml. Its type preservation is clear even to the OCaml type checker. The correctness of both the algorithm and its implementation thus becomes clear.
Our algorithm is easily amenable to optimizations. In particular, its output and the running time can both be made linear in the size (i.e., the number of all constructors) of the input De Bruijn-indexed term.
skconv.ml [34K]
The complete code accompanying the paper
The code presents the basic SKI translation and all optimizations
described in the paper, along with several interpreters.
We build on the embedding of the ordinary simply-typed lambda calculus with De Bruijn indices described earlier. An object term of the type a was represented as a value of the type repr h a, where the binary type constructor repr is a member of the class Symantics.
Here h
stands for the type environment, assigning types
to free variables (`hypotheses') of a term. Linear lambda calculus
regards bound variables as representing resources; referencing a
variable consumes the resource. We use the type environment for
tracking the state of resources: available or consumed.
The type environment becomes the type state.
We follow the approach described in
Variable (type)state `monad'.
We represent linear lambda terms by Haskell values of the type repr hi ho a, where hi stands for the variable state before evaluating the term and ho represents the state after evaluating the term. To be more precise, hi and ho are the types of the variable states. We can determine the types, and hence the state of the variables, statically: as usual, the type checker does abstract interpretation.
In our tagless-final encoding, lam has the following type:

lam :: repr (F a,hi) (U,ho) b -> repr hi ho (a->b)

The expression (lam e) has the type repr hi ho (a->b) provided the body of the abstraction, e, has the type repr (F a,hi) (U,ho) b. That is, in the environment extended with a term of the type a, the body must produce a value of type b. The body must consume the term at the top of the environment, changing the type of the first environment cell from F a to U (the type of the used variable).
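For concreteness, here is a sketch of what the complete typestate-indexed signature may look like; the names and the exact types below are assumptions for illustration, not the actual interface of LinearLC.hs.

data F a   -- an available (fresh) variable of type a
data U     -- a used-up variable

class LSymantics repr where
  z   :: repr (F a, h) (U, h) a                       -- consume the top variable
  s   :: repr hi ho a -> repr (any, hi) (any, ho) a   -- skip the top variable
  lam :: repr (F a, hi) (U, ho) b -> repr hi ho (a -> b)
  app :: repr hi h (a -> b) -> repr h ho a -> repr hi ho b

In this sketch the state is threaded through an application left-to-right, and the types of z and lam together ensure that each bound variable is consumed exactly once.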
A trivial modification turns the embedding of the linear lambda-calculus into that of the affine lambda-calculus, which allows bound variables to be ignored. The K combinator becomes expressible.
Jeff Polakow: Embedding a full linear Lambda calculus in Haskell
Proceedings of the ACM SIGPLAN Haskell Symposium, 2015,
pp. 177-188.
Polakow's tagless-final linear lambda-calculus interpreter
relies on higher-order abstract syntax, rather than De Bruijn
indices of LinearLC.hs. He implements the full linear lambda calculus
with additives and units.
In the standard (original) presentation of lambda calculus, variables
are denoted by symbolic names (strings); a binding location, `lambda',
is labeled by the name of the variable it binds. De Bruijn's insight
was to replace names with integers -- `nameless dummies' -- which
identify the corresponding binding occurrence by the position in the
sequence of nested lambdas. De Bruijn indices count the position
`inside-out' (so that index 0 points to the innermost lambda), whereas
De Bruijn levels count in the opposite direction (hence the outermost
lambda is at level 0). For example, the S combinator, which is
\f\g\x. f x (g x)
in the nameful notation, is written as
lam lam lam 2 0 (1 0)
in the De Bruijn indices notation, and
lam lam lam 0 2 (1 2)
in the De Bruijn levels notation. With indices or
levels, we no longer have to label binding locations, and, mainly, no longer
need alpha-conversion. We can compare terms without
needing to rename anything.
In the indices notation, lam 0 denotes the identity function -- whether this term appears by itself or as part of a larger term, for example, lam lam ((lam 0) (1 0)) (which is \f\x. (\z.z) (f x)). In contrast, De Bruijn-levels terms are inherently unmodular: the identity function is represented by lam 0 only if it appears standalone. In \f\x. (\z.z) (f x), the very same identity subterm is denoted as lam 2. On the other hand, the term \x\f. (\y\g. g y x) (f x) in the indices notation becomes lam lam (lam lam (0 1 3)) (0 1). The variable x, bound by the outermost lambda, is denoted either by 1 or by 3, depending on its occurrence within the term. The same term in the levels notation, lam lam (lam lam (3 2 0)) (1 0), shows that all occurrences of the same variable are denoted by the same integer. De Bruijn-levels terms are therefore much easier to read.
Terms in the indices notation are, however, much easier to embed. The typing rules of the calculus (for z and lam) give a hint:

t, G |- z : t

 t, G |- e : t'
------------------
G |- lam e : t->t'

The type environment G is an ordered sequence (a list); lam pushes the argument type onto the top of the list, and z (the variable of the index 0) retrieves it from there. This symmetry leads to a simple implementation, with G represented by a nested tuple. Here is the tagless-final embedding, as an OCaml module signature:

module type LamIx = sig
  type ('g,'a) repr
  val z:   ('a*'g,'a) repr
  val s:   ('g,'a) repr -> (_*'g,'a) repr
  val lam: ('a*'g,'b) repr -> ('g,'a->'b) repr
  val (/): ('g,'a->'b) repr -> ('g,'a) repr -> ('g,'b) repr   (* application *)
end

The inferred type of the sample term lam (lam (s z / s (s z) / z)), namely, ('a * 'b, ('a -> 'c -> 'd) -> 'c -> 'd) L.repr, shows it to be an open term of the type ('a -> 'c -> 'd) -> 'c -> 'd with one free variable of the type 'a. We aim at a similar typeful embedding of the levels-based calculus: the inferred type of the OCaml value should tell not just the type of the term but also its free variables and their types.
However, the embedding of the levels-based calculus seems next to impossible. Again, consider the representative typing rules:

t, G |- z : t

 G, t |- e : t'
------------------
G |- lam e : t->t'

Whereas z retrieves the type from one end of the typing environment G, lam pushes its binding variable type to the other end. Our embedding should be able to instruct the type-checker to push and pop items to/from a queue of an arbitrary length. Type-level functions seem inevitable.
Let us think more about the inherent unmodularity of De Bruijn levels. Let v0 denote a variable with the level 0 and v1 the variable with the level 1. The open term v0 v1 has two free variables -- which should be apparent in the type of its embedding. If we use this term to build a bigger term lam (v0 v1), which of the two variables does the lam bind? We cannot actually tell. If lam (v0 v1) is standalone, lam binds v0 and v1 is left free. If the term is a part of lam (lam (v0 v1)), the inner lam binds v1. As a part of lam (lam (lam (v0 v1))), the innermost lam binds neither v0 nor v1. When building terms bottom-up, from fragments, we may hence need to somehow tell the level of a lam in the complete term -- or assign the level to a lam. This is the solution. It relies on the library of heterogeneous lists Hlist_simple, which provides just what we need: a lens, to retrieve an element from a heterogeneous list based on its `index', and to replace the element (with a value of a different type).
Here is the embedding of the De Bruijn-levels calculus, again as an OCaml signature:
module type LamLev = sig
  type ('g,'a) repr
  val v:   ('g,'a,_,_) lens -> ('g,'a) repr
  val lam: ('g1,_,'a,'g) lens -> ('g,'b) repr -> ('g1,'a->'b) repr
  val (/): ('g,'a->'b) repr -> ('g,'a) repr -> ('g,'b) repr   (* application *)
end

(We could have used z and s from the indices embedding; we went for the uniformity of reference and binding occurrences.)
The type ('g,'a,'b,'g1) lens of the Hlist_simple library is the type of the lens into the heterogeneous sequence 'g to retrieve an element of the type 'a. If that element is later replaced by a value of the type 'b, the entire sequence type changes to 'g1. To construct values of the lens type, the library provides the constructors LHere and LNext.
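For illustration only, the same lens type can be rendered in Haskell as a GADT; Hlist_simple is an OCaml library, and its actual definitions may differ.

{-# LANGUAGE GADTs #-}

-- Lens g a b g1 points at an element of type a inside the nested-tuple
-- sequence g; replacing it with a value of type b changes the sequence
-- type to g1.
data Lens g a b g1 where
  LHere :: Lens (a, r) a b (b, r)                      -- element at the head
  LNext :: Lens g a b g1 -> Lens (c, g) a b (c, g1)    -- element further down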
The running example is now written as lam n1 (lam n2 (v n1 / v n0 / v n2)), where we have defined n0 to be LHere, n1 to be LNext n0, and n2 to be LNext n1. The inferred type is

('a * ('b * ('c * 'd)), ('a -> 'e -> 'f) -> 'e -> 'f) L.repr
It tells that the term has the type ('a -> 'e -> 'f) -> 'e -> 'f and has one free variable, at the level 0, of the type 'a. The type also tells that the term uses two other slots in the environment (for bound variables). Leaking the information about bound variables betrays the unmodularity of De Bruijn levels. One may also view this information as a hint as to how much space to reserve in the environment at run-time.
In conclusion, by bringing the labels to binding occurrences, we made the construction of terms a bit more predictable. Mainly, we were able to embed the De Bruijn-levels calculus in a host language with GADTs, without resorting to full-blown type-level programming or dependent types.
I thank Simon Charlow for the insightful question.
(requires Hlist_simple)
OCaml code and examples, including two interpreters:
the evaluator and the pretty printer. For the sake of comparison, the code
also contains the tagless-final embedding of the lambda calculus
with De Bruijn indices.
Typed heterogeneous collections: Lightweight HList
The dependency: the Hlist_simple
library
TDPE.hs [6K]
Tagless-final presentation of type-directed partial evaluation
ToTDPE.hs [<1K]
The imported module with sample functions to reify.
Compiling this module makes for a nicer example.
<http://www.brics.dk/~danvy/tdpe-ln.pdf>
Olivier Danvy: Lecture notes on type-directed partial evaluation.
The lecture notes are based on his POPL96 paper.
The three interpreters differ only in the interpretation of abstractions (lam-forms). The semantics of abstraction is indeed what sets the three evaluation orders apart. The examples also implement tracing of the interpreter actions, to make clear which expressions are evaluated, in which sequence, and how many times. The interpreters are assuredly and patently type-preserving: the host languages -- OCaml and Haskell -- guarantee that. We thus obtain a higher-order embedded domain-specific language with a selectable evaluation order.
In Haskell, we define the DSL as a type class Symantics. The type class declaration specifies the syntax and its instances define the semantics of the language. Our DSL is typed. We use the Haskell type constant Int to also denote the DSL type of integers (although we could have used a different name). DSL function types are represented in Haskell by the type synonym Arr repr a b:
class Symantics repr where
  int :: Int -> repr Int                          -- integer literals
  add :: repr Int -> repr Int -> repr Int         -- addition
  sub :: repr Int -> repr Int -> repr Int         -- and subtraction
  app :: repr (Arr repr a b) -> repr a -> repr b  -- application

type Arr repr a b = repr a -> repr b

This somewhat unusual representation of DSL functions permits the uniform implementation of evaluation strategies. The class Symantics specifies the part of our calculus that is interpreted the same way in all evaluation strategies. Only DSL abstractions are interpreted differently; therefore, we make them a distinct DSL `feature', defined by its own type class:
class SymLam repr where
  lam :: (repr a -> repr b) -> repr (Arr repr a b)

After introducing the convenient `macro' let_ (which could have been called `bind') we write a sample object language term as follows:

let_ :: (Symantics repr, SymLam repr) => repr a -> (repr a -> repr b) -> repr b
let_ x y = (lam y) `app` x

-- The representation of the lambda-calculus term
--   (\z x -> let y = x + x in y + y) (100 - 10) (5 + 5)
t2 :: (Symantics repr, SymLam repr) => repr Int
t2 = (lam $ \z -> lam $ \x -> let_ (x `add` x) $ \y -> y `add` y)
       `app` (int 100 `sub` int 10)
       `app` (int 5 `add` int 5)

The Haskell expression t2 represents the lambda-calculus term (\z x -> let y = x + x in y + y) (100 - 10) (5 + 5); the variable z does not occur in the body of the abstraction whereas x occurs twice.
We are ready to write the interpreters for our DSL. We will interpret DSL expressions of the type a as Haskell values of the type m a, where m is some MonadIO. IO is needed to print the trace of interpreter actions. The representation is common to the three evaluation strategies. To distinguish them (when interpreting lam later on) we add a phantom type parameter, the label l. It will take the values Name, Value and Lazy; more strategies may be added later on.
newtype S l m a = S { unS :: m a } deriving (Monad, Applicative, Functor, MonadIO)
Here is the code common to all interpreters regardless of the evaluation strategy. The reason to parameterize the interpreter over MonadIO is to print out the evaluation trace, so that we can see the difference among the evaluation strategies in the number of performed additions and subtractions.
instance MonadIO m => Symantics (S l m) where
  int = return
  add x y = do
    a <- x
    b <- y
    liftIO $ putStrLn "Adding"
    return (a + b)
  sub x y = do
    a <- x
    b <- y
    liftIO $ putStrLn "Subtracting"
    return (a - b)
  app x y = x >>= ($ y)

Call-by-name is defined by the following interpretation of DSL abstractions:
data Name    -- The label

instance Monad m => SymLam (S Name m) where
  lam f = return f

The abstraction, when applied, will receive an unevaluated argument expression -- of the type repr a, or S Name m a -- using (substituting) it as it is.
We evaluate the sample term under call-by-name:

runName :: S Name m a -> m a
runName x = unS x

t2SN = runName t2 >>= print

obtaining the result 40 and observing from the trace that the subtraction was not performed: the value of int 100 `sub` int 10 was not needed to compute the result of t2. On the other hand, the sub-expression int 5 `add` int 5 was evaluated four times.
Call-by-value differs from call-by-name only in the interpretation of abstractions:
data Value

instance Monad m => SymLam (S Value m) where
  -- lam f = return (\x -> (f . return) =<< x)
  -- or, in the pointless notation, as below:
  lam f = return (f . return =<<)

As in the call-by-name evaluator, the lambda-abstraction receives an unevaluated argument expression -- but, unlike call-by-name, it evaluates that expression before passing the result to the abstraction body f. This is literally the definition of call-by-value. The very same sample term t2 can be interpreted differently:
runValue :: S Value m a -> m a
runValue x = unS x

t2SV = runValue t2 >>= print

giving in the end the same result 40. Although the result of the subtraction int 100 `sub` int 10 in t2 was not needed, the trace shows it was performed. On the other hand, the argument sub-expression int 5 `add` int 5 was evaluated only once. In call-by-value, arguments are evaluated exactly once.
The call-by-need evaluator differs from the others again in one line, the interpretation of abstractions:
data Lazy

instance MonadIO m => SymLam (S Lazy m) where
  -- lam f = return (\x -> f =<< share x)
  -- Or, in the pointless notation
  lam f = return ((f =<<) . share)

The evaluation of the body of the abstraction always starts by lazily sharing the argument expression. (The implementation of share is straightforward; see the code for details.) Again, this is the definition of call-by-need. We run the very same term t2 with the new evaluator, obtaining the same result 40 and observing from the execution trace that the subtraction was not evaluated (because it was not needed) but the needed argument expression int 5 `add` int 5 was evaluated once. In call-by-need, application arguments are evaluated at most once.
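For concreteness, one possible share is sketched below, assuming memoization via an IORef; the accompanying code may implement it differently.

import Data.IORef
import Control.Monad.IO.Class (MonadIO, liftIO)

-- Memoize a monadic action: it is performed at most once, and later
-- demands return the cached result.
share :: MonadIO m => m a -> m (m a)
share m = do
  r <- liftIO $ newIORef Nothing
  return $ do
    v <- liftIO $ readIORef r
    case v of
      Just x  -> return x
      Nothing -> do
        x <- m
        liftIO $ writeIORef r (Just x)
        return x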
call_by_any.ml [7K]
The OCaml version of the code, using modules and functors
CB.hs [7K]
The older version of the code
It was originally posted as
``CBN, CBV, Lazy in the same final tagless framework''
on the Haskell-Cafe mailing list
on Thu, 8 Oct 2009 00:54:14 -0700 (PDT).
CB98.hs [7K]
The same code in Haskell98, proving that the tagless-final approach indeed requires fewer fancy type-system features. Fancy features like the type families used in CB.hs add convenience and gloss; one can do without them, however. The code can be (and has been) easily rewritten in OCaml, which has no typeclasses, type families or other bleeding-edge features.
Parameterized syntax:
interpreters without run-time interpretive overhead
Evaluating the same Scheme source language expression using call-by-value,
call-by-reference, and call-by-name evaluation strategies.
The most noticeable difference of Lambek calculus from the conventional or linear lambda-calculus is its directional implications and abstractions. There are two function types, typically written as B / A and A \ B, called left/right slash-types rather than arrow types. The function of the type B / A accepts the argument of the type A on the right; A \ B accepts the A argument on the left. There are hence two rules for eliminating implications and, correspondingly, two rules for introducing them, which bring in the power of hypothetical reasoning. Although the NL calculus per se has no structural rules, various NL theories add so-called structural postulates: the ways to rearrange the antecedent structure in particular limited ways.
All these features set Lambek calculi even farther apart from the Haskell type system. And yet they can be embedded in Haskell, in the tagless-final approach. All and only valid derivations are represented by Haskell values of a particular repr type. One tagless-final interpreter prints the ``yield'', the
constants used in the derivation. In linguistic applications
the yield spells out the sentence whose parse is represented by the
derivation. Other interpreters transform the derivation to a logical
formula that stands for the meaning of the sentence.
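As a rough sketch of how the directional types may be rendered in such an embedding (the names below are made up for illustration and are not the interface of the accompanying code):

{-# LANGUAGE TypeOperators #-}

data b :/: a    -- the type B / A: expects its A argument on the right
data a :\: b    -- the type A \ B: expects its A argument on the left

class Lambek repr where
  appR :: repr (b :/: a) -> repr a -> repr b    -- / elimination
  appL :: repr a -> repr (a :\: b) -> repr b    -- \ elimination

The introduction rules (hypothetical reasoning) and the structural postulates require more machinery; see the accompanying code.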
The so-called symmetric Lambek-Grishin calculus has, in addition to directional implications, directional co-implications and interesting symmetric structural rules for moving between the antecedent and the consequent structures. It too can be represented in the tagless-final style. The semantic interpretation builds the meaning formula in the continuation-passing style.
Lambek Grammars as an embedded DSL: a tool for semanticists
Embedding of Lambek Grammars based on the associative Lambek calculus
HOCCG.hs [28K]
Explanation of the code
Non-associative Lambek calculus NL with the non-standard
semantic interpretation. Applications to quantification,
non-canonical coordination and gapping.
LG.hs [10K]
Tagless-final embedding of the Lambek-Grishin symmetric calculus
and its 1-CPS translation
Our starting point is the regular CBV CPS translation for
lambda-LG, described on p. 697 of the paper by Michael Moortgat
``Symmetric Categorial Grammar''. J. Philos. Logic, 2009.
The original translation
(eq (20) of the paper) yields many administrative beta redices.
The present translation uses lightweight staging to remove
such redices in the process.