Our framework is not an experimental language with a new syntax and an uncertain future. Rather, it is a regular, not a bleeding-edge, library in mature Haskell. It is modular and extensible: one can add to the target language more features and constants, or change it to be typed or untyped, first- or higher-order. Polymorphism over generated environments makes generators composable. The higher-order abstract syntax used in the library makes target functions and variables human-readable. Our library may be regarded as `staged Haskell'.
There are many code generation frameworks with one or another of these features. Ours has all of them. Template Haskell, among other systems, falls short, assuring -- dynamically rather than statically -- the well-typedness of only the final code. Furthermore, although the generated code, after a type-check, is well-typed, it may still be ill-scoped.
The static types of our generator expressions not only ensure that a well-typed generator produces well-typed and well-scoped code. They also express the lexical scopes of generated binders and prevent mixing up variables with different scopes. For the first time we demonstrate, in an embedded domain-specific language, statically safe and well-scoped loop interchange and constant factoring from arbitrarily nested loops.
Joint work with Yukiyoshi Kameyama and Chung-chieh Shan.
The annotated slides of the talk presented at PEPM 14 on January 20, 2014 in San Diego, CA, USA.
Region Memory Management for Free Variables: Type- and Scope-Safe Code Generation with Mutable Cells
The formalization of one part of the combinator library, as the calculus <NJ>
The core of the code generation library
Code generation with control effects reaching beyond the closest binder: far-reaching let-insertion. The implementation of the CPSA applicative hierarchy.
Examples and Sample code
Here is a sample MetaOCaml code and its Scheme translation
# let eta = fun f -> .<fun x -> .~(f .<x>.)>. in
  .<fun x -> .~(eta (fun y -> .<x + .~y>.))>.
- : ('a, int -> int -> int) code = .<fun x_1 -> fun x_2 -> (x_1 + x_2)>.

(let ((eta (lambda (f) (bracket (lambda (x) (escape (f (bracket x))))))))
  (bracket (lambda (x) (escape (eta (lambda (y) (bracket (+ x (escape y)))))))))

Translating MetaOCaml code into Scheme seems trivial: code values are like S-expressions, MetaOCaml's bracket is like quasiquote, escape is like unquote, and `run' is eval. If we indeed replace bracket with quasiquote and escape with unquote in the above Scheme code and then evaluate it, we get the S-expression
(lambda (x) (lambda (x) (+ x x))), which is a wrong result, quite different from the code expression
.<fun x_1 -> fun x_2 -> (x_1 + x_2)>. produced by MetaOCaml. The latter is the code of a curried function that adds two integers. The S-expression produced by the naive Scheme translation represents a curried function that disregards the first argument and doubles the second. This demonstrates that the often mentioned `similarity' between the bracket and the quasiquote is flawed. MetaOCaml's bracket respects alpha-equivalence; in contrast, Scheme's quasiquotation, being a general form for constructing arbitrary S-expressions (not necessarily representing any code), is oblivious to the binding structure.
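The capture can be reproduced in a few lines of any language with first-class functions. The following Python sketch is an illustration only (the names eta_naive, eta_hygienic, gensym are ours): code values are modeled as nested lists standing for S-expressions. The naive, quasiquote-like translation builds the binder name 'x' literally and captures it; generating a fresh name at each bracket evaluation preserves the binding structure.

```python
# Code values modeled as nested lists standing for S-expressions.

# Naive translation: a bracketed lambda uses the literal name 'x'.
def eta_naive(f):
    return ['lambda', ['x'], f('x')]

wrong = ['lambda', ['x'], eta_naive(lambda y: ['+', 'x', y])]
# -> ['lambda', ['x'], ['lambda', ['x'], ['+', 'x', 'x']]] : 'x' is captured

# Hygienic translation: every evaluation of a bracket renames its binder.
_counter = 0
def gensym():
    global _counter
    _counter += 1
    return f'x_{_counter}'

def eta_hygienic(f):
    v = gensym()
    return ['lambda', [v], f(v)]

def outer():
    x = gensym()
    return ['lambda', [x], eta_hygienic(lambda y: ['+', x, y])]

right = outer()
# -> ['lambda', ['x_1'], ['lambda', ['x_2'], ['+', 'x_1', 'x_2']]]
```

The two binders in the hygienic result are distinct, so the inner addition correctly refers to both of them.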
Our implementation still uses S-expressions for code values (so we can print them and
eval-uate them) and treats escape as unquote. To maintain hygiene, we need to make sure that every run-time evaluation of a bracket form such as
(bracket (lambda (x) x)) gives
'(lambda (x_new) x_new) with the unique bound variable
x_new. Two examples in the source code demonstrate why static renaming of manifestly bound identifiers is not sufficient. We implement the very clever suggestion by Chung-chieh Shan and represent a staged expression such as
.<(fun x -> x + 1) 3>. by the sexp-generating expression
`(,(let ((x (gensym))) `(lambda (,x) (+ ,x 1))) 3) which evaluates to the S-expression
((lambda (x_1) (+ x_1 1)) 3). Thus
bracket is a complex macro that transforms its body to the sexp-generating expression, keeping track of the levels of brackets and escapes. The macro
bracket is implemented as a CEK machine with the defunctionalized continuation. In our implementation, the Scheme translation of the eta-example yields the S-expression
(lambda (g6) (lambda (g8) (+ g6 g8))). Just as in MetaOCaml, the result represents the curried addition.
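The translation of a bracket into a sexp-generating expression can be sketched in Python (our own names; an illustration of the idea, not the macro itself). The bracket is compiled once into a thunk; every run of the thunk renames the bound variable afresh, so repeated evaluation yields alpha-equivalent but distinctly named code values.

```python
import itertools

_fresh = itertools.count(1)
def gensym():
    return f'x_{next(_fresh)}'

# The bracket .<(fun x -> x + 1) 3>. is compiled, once, into a thunk that
# GENERATES an S-expression, renaming the bound variable on each run:
def code_thunk():
    x = gensym()
    return [['lambda', [x], ['+', x, 1]], 3]

s1 = code_thunk()   # e.g. [['lambda', ['x_1'], ['+', 'x_1', 1]], 3]
s2 = code_thunk()   # same shape, but a fresh binder name
```

Each run produces well-scoped code; only the (irrelevant) binder names differ.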
CSP poses another implementation problem. In MetaScheme we can write the MetaOCaml expression
.<fun x -> x + 1>. as
(bracket (lambda (x) (+ x 1))), which yields the S-expression (lambda (g1) (+ g1 1)). When we pass this code to eval, the identifier
+ will be looked up in the environment of eval, which is generally different from the environment that was in effect when the original bracket form was evaluated. That might not look like much of a difference since
+ is probably bound to the same procedure for adding numbers in either environment. This is no longer the case if we take the following MetaOCaml code and its putative MetaScheme translation:
let test = let id u = u in .<fun x -> id x>.

(define test
  (let ((id (lambda (u) u)))
    (bracket (lambda (x) (id x)))))

The latter definition binds test to the S-expression (lambda (x) (id x)) that contains a free identifier id, unlikely to be bound in the environment of eval. Our code values therefore should be `closures', able to include values that are (possibly locally) bound in the environment at the time the code value was created. Incidentally, the problem of code closures closed over the environment of the generator also appears in syntax-case macros. The R6RS editors chose to prohibit the occurrence of locally-bound identifiers in the output of a syntax-case transformer.
MetaOCaml among other staged systems does permit the inclusion of values from the generator stage in the generated code. Such values are called CSP; they are evident in the output of the MetaOCaml interpreter for the above test:
val test : ('a, 'b -> 'b) code =
  .<fun x_1 -> (((* cross-stage persistent value (as id: id) *)) x_1)>.

In MetaScheme, we have to write CSP explicitly:
(% e), which is often called `lift'.
(define test
  (let ((id (lambda (u) u)))
    (bracket (lambda (x) ((% id) x)))))

One may think that such a lifting is already available in Scheme:
(define (lift x) (list 'quote x)). Although this works in the simple cases of
x being a number, a string, etc., it is neither universal nor portable: attempting to lift a closure or an output port this way could be an error. According to R5RS, the argument of a quotation is an external representation of a Scheme datum. Closures, for example, are not guaranteed to have an external representation. For portability, we implement CSP via a reference in a global array, taking advantage of the fact that both the index (a number) and the name of the array (an identifier) have external representations and hence are trivially liftable by quotation. This is precisely the mechanism used by the current version of MetaOCaml.
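The global-array mechanism can be sketched in a few lines of Python (the names csp_table, lift, resolve are ours). Any value, even a closure, is lifted by storing it in the table and generating a reference built only from quotable parts: a tag and a numeric index.

```python
# CSP via a global table: any value, even a closure, is lifted by storing
# it in the table and generating a reference whose parts (a name and a
# numeric index) have external representations.
csp_table = []

def lift(v):
    csp_table.append(v)
    return ['csp-ref', len(csp_table) - 1]

def resolve(ref):
    tag, i = ref
    assert tag == 'csp-ref'
    return csp_table[i]

ident = lambda u: u                       # a closure: not quotable as a datum
code = ['lambda', ['x'], [lift(ident), 'x']]
# code == ['lambda', ['x'], [['csp-ref', 0], 'x']] : trivially quotable

# At eval time, the interpreter resolves the reference back to the closure:
restored = resolve(code[2][0])
```

The generated code contains only printable data; the closure itself lives in the table and is recovered when the code is run.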
The source code contains many more examples of the translation of MetaOCaml code into MetaScheme -- including the examples with many stages and CSP.
Joint work with Chung-chieh Shan.
let-bindings. Rather, they can program an automatic insertion at the `right' and assuredly safe place. The end-users of the DSL do not have to worry about let-bindings or even know about them. This safe and modular let-insertion has become possible in MetaOCaml for the first time.
Program generators, as programs in general, greatly benefit from compositionality: the meaning of a complex expression is determined by its structure and the meanings of its constituents. Building code fragments using MetaOCaml's brackets and escapes is compositional. For example, in
let sqr : int code -> int code = fun e -> .<.~e * .~e>.

the meaning of sqr e -- that is, the code it generates -- is the multiplication of two copies of the expression generated by e. We are sure of that even if we know nothing of e except that it is pure. Likewise, in
let make_incr_fun : (int code -> int code) -> (int -> int) code =
  fun body -> .<fun x -> x + .~(body .<x>.)>.

let test1 = make_incr_fun (fun x -> sqr .<2+3>.)
(* .<fun x_24 -> x_24 + ((2 + 3) * (2 + 3))>. *)

make_incr_fun body should produce the code for an OCaml function. The result, shown in the comments, confirms our expectations.
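The compositional (but duplication-prone) meaning is easy to model with code values as plain strings. A Python sketch, with our own gensym supplying the fresh binder:

```python
import itertools

_fresh = itertools.count(1)
def gensym():
    return f'x_{next(_fresh)}'

def sqr(e):
    """int code -> int code: multiply two copies of the generated code."""
    return f'({e} * {e})'

def make_incr_fun(body):
    """(int code -> int code) -> (int -> int) code."""
    x = gensym()
    return f'fun {x} -> {x} + {body(x)}'

test1 = make_incr_fun(lambda x: sqr('(2 + 3)'))
# -> 'fun x_1 -> x_1 + ((2 + 3) * (2 + 3))' : the argument is duplicated
```

Because sqr splices its argument twice, the generated body contains two copies of (2 + 3), exactly the duplication discussed next.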
Compositionality lets us think about programs in a modular way,
helping make sure the program result is the one we had in mind. We
ought to strive for compositional programs and libraries. The trick
however is selecting the `best' meaning for an expression. Earlier we
took as the meaning of a code generator the exact form of the
expression it produces. Under this meaning, compositionality requires
the absence of side-effects. Our
test1 is pure and so its result is
easy to predict: it has to be the code of a function whose body
contains two copies of the expression
2+3. Although the confidence
in the result is commendable, the result itself is not. Duplicate
expressions make the code large and inefficient: imagine something complex in 2+3's place. Furthermore, the duplicated expression in test1 does not depend on x and can be lifted out of the function's body -- so that
it can be computed once rather than on each application of the
function. In short, we would like the result of
test1 to look like

let test_desired = .<let t = 2+3 in fun x_24 -> x_24 + (t * t)>.
One may remark that a sufficiently smart compiler should be able to
transform the code produced by test1 by
performing common subexpression elimination and invariant code
motion. Although the optimizations may be expected for the simple
test1, an expression more complex than
2+3 will thwart
them. First of all, it is harder to see the commonality of two large
expressions. Second, the two optimizations are sound only if the
expression to move and share is pure. If it calls external functions, the compiler will likely not be able to check that the functions are pure. Finally, if
2+3 were replaced with
read_config "foo", where read_config performs IO, the compiler must not de-duplicate or move
such an expression. The DSL designer however may know that
read_config, albeit effectful, is safe to move and eliminate. Its
side-effect, reading a file, is irrelevant for the end users of
the program. Therefore, the DSL designer needs a way to explicitly perform the
optimizations like duplicate elimination and invariant code motion,
without relying on the compiler. After all, the aspiration of staging
is to let DSL designers express expert knowledge and use it for domain-specific optimizations -- in effect turning a general-purpose
compiler into a domain-specific optimization toolkit.
We change the meaning of code generators:
sqr e now means the
multiplication of two copies of
e or of the result of evaluating
e. The meaning is the generated code with possible let-expressions.
The new meaning permits effectful generators, whose side-effect is let-insertion.
MetaOCaml could offer the simplest such generators as built-ins. Adding more built-ins to MetaOCaml -- just like adding more system calls to an OS -- makes the system harder to maintain and its correctness harder to ensure. Generally, what can be efficiently implemented at the
`user-level' ought to be done so. Let-insertion can be written as an
ordinary library, relying on the delimited control library delimcc.
The first attempt at let-insertion takes only a few lines:
open Delimcc

let genlet : 'w code prompt -> 'a code -> 'a code = fun p cde ->
  shift p (fun k -> .<let t = .~cde in .~(k .<t>.)>.)

let with_prompt : ('w prompt -> 'w) -> 'w = fun thunk ->
  let p = new_prompt () in
  push_prompt p (fun () -> thunk p)

The function
genlet takes a code expression and let-binds it -- at the place marked by
with_prompt. The two functions communicate via the so-called prompt. The evaluation sequence for a simple example below demonstrates the process:
with_prompt (fun p -> sqr (genlet p .<2+3>.))
--> (* evaluating genlet *)
with_prompt (fun p ->
  sqr (shift p (fun k -> .<let t = .~(.<2+3>.) in .~(k .<t>.)>.)))
--> (* evaluating shift, capturing the continuation up to the prompt,
       binding it to k *)
with_prompt (fun p ->
  let k = fun hole -> push_prompt p (fun () -> sqr hole) in
  .<let t = .~(.<2+3>.) in .~(k .<t>.)>.)
-->
with_prompt (fun p ->
  let k = fun hole -> push_prompt p (fun () -> sqr hole) in
  .<let t = 2+3 in .~(k .<t>.)>.)
--> (* applying the captured continuation k *)
with_prompt (fun p ->
  .<let t = 2+3 in .~(push_prompt p (fun () -> sqr .<t>.))>.)
-->* (* evaluating (sqr .<t>.) *)
with_prompt (fun p -> .<let t = 2+3 in .~(.<t * t>.)>.)
-->*
.<let t = 2+3 in t * t>.
In the end, genlet has extracted .<2+3>. and let-bound it at the place where with_prompt was. The let-binding of
2+3 effectively eliminated
the expression duplication, saving us, or the compiler, the trouble of searching
for common subexpressions. The binding place can be arbitrarily far away from the genlet expression:

with_prompt (fun p ->
  make_incr_fun (fun x -> sqr (genlet p .<2+3>.)))
(* .<let t_17 = 2 + 3 in fun x_16 -> x_16 + (t_17 * t_17)>. *)

thus realizing the invariant code motion, moving 2+3 out of the function body. Code motion is desirable, but can also be dangerous:
with_prompt (fun p ->
  make_incr_fun (fun x -> sqr (genlet p .<.~x+3>.)))
(* BEFORE N100: .<let t_17 = x_16 + 3 in fun x_16 -> x_16 + (t_17 * t_17)>. *)
(* BER N101: exception pointing out that in .<.~x+3>. the variable x
   escapes its scope *)

This code attempts to move the expression that contains x outside x's binding! Prior to BER N100, the attempt was successful, generating the shown code that exhibits the so-called scope extrusion. In the current version of BER MetaOCaml, the example type-checks as before. However, its evaluation no longer succeeds: running the generator throws an exception with an informative message.
We have just seen code generation with control effects: let-insertion is highly desirable and highly dangerous, and in the present MetaOCaml it is finally safe. It is safe in the following sense: if the generator successfully finishes producing the code, the result is well-typed and well-scoped.
Although the naive attempt to program let-insertion works and is now
safe, it is not convenient. One has to explicitly mark the
let-insertion place. When several places are marked, we or the user
have to choose. We want to automate such choices. We would like the result of test1 to be the desired code test_desired, with let-bindings. The generators sqr and make_incr_fun -- which are programmed by the DSL designer -- may be re-defined. However, test1 should be left as
it was. It is written by the end user, who should not care or know about let-insertion taking place, let alone pass prompts around.
Recall that trying to insert
let at a wrong place raises an
exception. This dynamic exception lets us program
genlet so to try
the insertion at various places -- farther and farther up the call
chain -- until we get an exception. The best place to insert let is
the one that is farthest from
genlet while causing no exceptions.
We arrive at the following simplified interface for the let-insertion:
val genlet    : 'a code -> 'a code
val let_locus : (unit -> 'w code) -> 'w code

The interface no longer mentions any prompts. The function let_locus marks possible places for inserting let-bindings; genlet chooses the best place among the marks and let-binds its argument expression there. This let-insertion is safe, and the safety is easy to prove: since this interface is implemented as an ordinary library with no compiler magic, by elaborating the naive implementation, the static guarantees of MetaOCaml hold. In particular, if an attempt is made to generate ill-scoped code, an exception will be thrown. By contraposition, if no exception is thrown, the generated code has no scope extrusion. As an example, we re-define
sqr and make_incr_fun using the new primitives for let-insertion. We fully reuse the earlier versions without re-implementing them: the new sqr says that its argument should be let-bound, and the new make_incr_fun says that the places before and after the function binder are good places for let-insertion.
let sqr e = sqr (genlet e)

let make_incr_fun body =
  let_locus @@ fun () ->
  make_incr_fun @@ fun x ->
  let_locus @@ fun () ->
  body x

With these new sqr and make_incr_fun, the very same test1 produces the desired code (shown in the comments below):
let test1 = make_incr_fun (fun x -> sqr .<2+3>.)
(* .<let t_17 = 2 + 3 in fun x_16 -> x_16 + (t_17 * t_17)>. *)

let test2 = make_incr_fun (fun x -> sqr .<.~x + 3>.)
(* .<fun x_18 -> x_18 + (let t_19 = x_18 + 3 in t_19 * t_19)>. *)

The slightly modified test2 also inserts the let, at a different, safe place.
We have demonstrated for the first time the self-adjusting, safe and convenient let-insertion with static guarantees. The scope extrusion check meant to crash bad generators surprisingly helps implement good ones -- more convenient than those possible otherwise.
A simple illustration of let-insertion, its convenience and dangers (the example from the FLOPS 2014 talk)
The example uses the `traditional' implementation of let-insertion, which is not convenient.
The safe and convenient let-insertion
The annotated slides of the talk presented at FLOPS 2014 in Kanazawa, Japan on June 4, 2014.
Loop-invariant code motion with the convenient let-insertion (the main example in the FLOPS 2014 talk)
The example is the following simple expression.
\i -> (if i then 2 else 3) + 4

To partially evaluate it, we first annotate all sub-expressions as static S (known at compile time) or dynamic D (known only at run-time). That is, we perform the `binding-time analysis' (BTA) and add binding-time annotations. For clarity, we show all the steps of the analysis. First, we see no applications of our function and hence do not statically know its argument:

\i_D -> (if i_D then 2 else 3) + 4

The if-expression with the dynamic condition is dynamic:

\i_D -> (if i_D then 2 else 3)_D + 4

and the addition expression with a dynamic operand is dynamic:

\i_D -> ((if i_D then 2 else 3)_D + 4)_D

All sub-expressions are dynamic and hence there are no opportunities for partial evaluation.
Let's re-write the original expression in CPS:

\i k -> let k' = \v -> k (v+4) in if i then k' 2 else k' 3

and do the binding-time analysis again. As before, the function arguments, i and k, must be dynamic:

\i_D k_D -> let k' = \v -> k_D (v+4) in if i_D then k' 2 else k' 3

The variable k' is static since its value, the \v -> ... abstraction, is statically known. Constants are obviously static:

\i_D k_D -> let k'_S = \v -> k_D (v+4) in if i_D then k'_S 2_S else k'_S 3_S

There are now two applications of a statically known function to statically known arguments. We can do them statically, at specialization (or `compile') time, obtaining:

\i k -> if i then k 6 else k 7

which is certainly an improvement: there are no longer any additions left for the run-time.
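The gain can be checked mechanically. In this Python sketch (the names are ours), residual code is a string and the static continuation is an ordinary function, applied at specialization time:

```python
# Direct style: the dynamic if-test makes every sub-expression dynamic,
# so the specializer can only residualize the program as-is.
def specialize_direct():
    return '\\i -> (if i then 2 else 3) + 4'

# CPS: the continuation \v -> k (v+4) is static, so its two applications
# to the static constants 2 and 3 are performed at specialization time.
def specialize_cps():
    k_static = lambda v: v + 4
    return f'\\i k -> if i then k {k_static(2)} else k {k_static(3)}'
```

The CPS specializer emits the improved residual program with the additions already done.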
Bondorf (1992) showed the alternative to writing the source program in CPS. We should instead write the specializer (partial evaluator) in CPS, and skillfully massage the continuations to achieve binding-time improvements. A partial evaluator becomes harder to write, but the source programs may remain in the original, direct style, and still benefit from improved specialization. Lawall and Danvy (1994) showed that delimited control helps keep even the partial evaluator in direct style. Thus a specializer capable of improvements shown in this message becomes only a little more difficult to write.
Julia L. Lawall and Olivier Danvy: Continuation-Based Partial Evaluation
Conf. Lisp and Functional Programming, 1994
However, the seemingly straightforward ranking operations ORDER BY and LIMIT are not supported efficiently, consistently or at all in subqueries. The SQL standard defines their behavior only when applied to the whole query. Language-integrated query systems do not support them either: naively extending ranking to subexpressions breaks the distributivity laws of UNION ALL underlying optimizations and compilation.
We present the first compositional semantics of ORDER BY and LIMIT,
which reproduces in the limit the standard-prescribed SQL behavior but
also applies to arbitrarily composed query expressions and preserves
the distributivity laws. We introduce
the relational calculus SQUR that includes ordering and subranging
and whose normal forms correspond to efficient, portable, subquery-free SQL.
Treating these operations as effects, we describe a type-and-effect
system for SQUR and prove its soundness. Our denotational semantics
leads to the provably correctness-preserving
normalization-by-evaluation. An implementation of SQUR thus
becomes a sound and efficient language-integrated query system.
Slides of the talk presented at APLAS 2017, November 29, 2017. Suzhou, China
We demonstrate a new technique of integrating database queries into a typed functional programming language, so to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language as well as the transformation rules needed to generate efficient SQL are safely user-extensible, to account for many variations in the SQL back-ends, as well for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines.
With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
Joint work with Kenichi Suzuki and Yukiyoshi Kameyama.
The paper published in the Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM), 2016, St. Petersburg, FL, USA, January 18-19, 2016, pp. 37-48
< http://logic.cs.tsukuba.ac.jp/~ken/quel/ >
The source code of the system. It is distributed under the MIT License.
The original Template Haskell (Haskell Workshop, 2002) is essentially untyped, with no guarantees that the generated code is any good, or even well-typed. Writing code generators was akin to programming in `dynamically typed' languages: although we can easily tell that the fully generated program is untypable (because it cannot be compiled), we cannot easily point out which part of the generator was to blame. The dissonance with the type discipline of Haskell is jarring. On the other hand, Template Haskell is very expressive: it generates not only expressions but also definitions, type declarations, classes and instances, etc.-- all sort of things that may appear in Haskell code. Designing a type system for such broad code generation was, and still is, a great challenge.
Against this background, Typed Template Haskell was introduced. It produces code only for expressions (and not for declarations, etc.). On the other hand, we can finally write typed generators. Typed Template Haskell is quite similar to the lambda-circle calculus or MetaOCaml, but restricted to two levels. Here is a sample typed generator.
t1 :: Q (TExp Int)
t1 = do c1 <- [|| (1::Int) + 2 ||]
        c2 <- [|| 3 + $$(return c1) ||]
        return c2

Like in MetaOCaml, the type of code values TExp is indexed by the type of the expression to be generated.
[|| ... ||] is the typed quotation, corresponding to the brackets .< ... >. of MetaOCaml. The brackets enclose, or quote, the code to generate, which must be a well-typed expression. The unquotation, or typed splice, is
$$( ... ). The splice is evaluated and must produce (a possibly open) code, which is then spliced into the enclosing typed quotation. If the splice appears outside a quotation, it is called top-level. In that case, the splice must produce a closed expression, which is inserted into the code being compiled. The example generator produces the following code:
3 GHC.Num.+ ((1 :: GHC.Types.Int) GHC.Num.+ 2)

or, hiding the module qualifications,
3 + (1 + 2).
The problematic example is very short:
t2 :: Q (TExp Int)
t2 = do r <- runIO $ newIORef undefined
        c1 <- [|| \x -> (1::Int) +
                  $$(do xv <- [|| x ||]
                        runIO $ writeIORef r xv
                        return xv) ||]
        runIO $ readIORef r

It generates
x_0: just the single, unbound variable. It is clearly not typable. Such code cannot be compiled or spliced-in at the top level. The culprit is
runIO, which runs IO computations from within typed splices. It is highly desirable, for error handling and many other reasons. It is also responsible for breaking type soundness.
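The mechanism of the failure can be modeled in Python (our own names; code values are nested lists as before). A splice that may perform arbitrary effects writes open code into a mutable cell; the well-formed lambda is then discarded and the cell's content, an unbound variable, escapes.

```python
import itertools

_fresh = itertools.count(0)
def gensym():
    return f'x_{next(_fresh)}'

def bracket_fun(body):
    """Model of a quoted \\x -> ...: fresh binder, body built from it."""
    x = gensym()
    return ['lambda', [x], body(x)]

r = [None]   # the IORef created via runIO $ newIORef undefined

def t2():
    def body(x):
        r[0] = x            # writeIORef r xv : open code escapes the binder
        return ['+', 1, x]
    bracket_fun(body)       # the well-formed lambda is discarded ...
    return r[0]             # ... and readIORef r returns the bare variable

leaked = t2()               # 'x_0' : a single, unbound variable
```

The result is exactly the ill-scoped fragment described above: a variable with no binder.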
Once again we see the tension between safety and expressiveness. Without effects, we cannot generate all the code we want and cannot handle errors well. With side effects, maintaining type safety is a challenge. Fortunately, the challenge has recently been met, outside GHC and Typed Template Haskell.
The problem sprang to attention during the lecture by Simon Peyton Jones at
the First Metaprogramming Summer School in Cambridge, UK in August 2016 -- when
runIO was mentioned.
Tagless-Staged: Combinators for Impure yet Hygienic Code Generation
Type sound code generation with arbitrary side-effects
MetaOCaml resolves the tension between expressiveness and safety in a different, easy-to-implement way.
The paper introduces the continuation+state monad for code generation with let-insertion, and briefly describes abstract interpretation for generating efficient code without its intensional analysis. These are the foundational techniques for typeful multi-staged (MetaOCaml) programming.
Joint work with Walid Taha, Kedar N. Swadi, and Emir Pasalic.
Dynamic Programming Benchmark: The corresponding MetaOCaml source code
< http://www.metaocaml.org/examples/dp/ >
Shifting the Stage: Staging with Delimited Control
An alternative, notationally more convenient approach relying on delimited control rather than monads
We demonstrate our extensions' use in specializing functor applications to eliminate their (currently large) overhead in OCaml. We explain the challenges that those extensions bring in and identify a promising line of attack. Unexpectedly, however, it turns out that we can avoid module generation altogether by representing modules, possibly containing abstract types, as polymorphic records. With the help of first-class modules, module specialization reduces to ordinary term specialization, which can be done with conventional staging. The extent to which this hack generalizes is unclear. Thus we have a question to the community: is there a compelling use case for module generation? With these insights and questions, we offer a starting point for a long-term program in the next stage of staging research.
Joint work with Jun Inoue and Yukiyoshi Kameyama.
D a => a -> a, where D is a Floating-like class, and produce a function of the same type that is the symbolic derivative of the former. The original function can be given to us in a separately compiled module with no available source code. The derived function is an ordinary numeric Haskell function and can be applied to numeric arguments -- or differentiated further. The Floating-like class D currently supports arithmetic and a bit of trigonometry. We also support partial derivatives.
test1f x = x * x + fromInteger 1
test1    = test1f (2.0::Float)      -- 5.0

test1f' x = diff_fn test1f x
test1'    = test1f' (3.0::Float)    -- 6.0

test1f'' x = diff_fn test1f' x
test1''   = test1f'' (10.0::Float)  -- 2.0

The original function test1f can be evaluated numerically, test1, or differentiated symbolically, test1f'. The result is again an ordinary numeric function (i.e., 2x), which can be applied to a numeric argument, see test1'. The derivative test1f' can be differentiated further.
The original function is emphatically not represented as an algebraic data type -- it is a numeric function like
tan. Still, we are able to differentiate it symbolically (rather than numerically or automatically). The key insight is that Haskell98 supports a sort of reflection -- or, to be precise, type-directed partial evaluation and hence term reconstruction.
Our approach also shows off the specification of the differentiation rules via type classes (which makes the rules extensible) and the emulation of GADT via type classes. In 2006, we improved the approach by developing an algebraic simplifier and by avoiding any interpretative overhead.
Most optimal symbolic differentiation of compiled numeric functions
Our approach combines reifying code into its `dictionary view', intensional analysis of typed code expressions, and staging so to evaluate under lambda. We improve the earlier, 2004 approach in algebraically simplifying the result of symbolic differentiation, and in removing interpretative overhead with the help of Template Haskell (TH). The computed derivative can be compiled down to machine code and so it runs at full speed, as if it were written by hand to start with.
In the process, we develop a simple type system for a subset of TH code expressions (TH is, sadly, completely untyped) -- so that accidental errors can be detected early. We introduce a few combinators for the intensional analysis of such typed code expressions. We also show how to reify an identifier like
(+) to a
TH.Name -- by applying TH to itself. Effectively we obtain more than one stage of computation.
Our technique can be considered the inverse of the TH splicing operation: given a (compiled) numeric expression of a host program, we obtain its source view as a TH representation. The latter can be spliced back into the host program and compiled -- after, perhaps, simplification, partial evaluation, or symbolic differentiation. As an example, given the definition of the ordinary numeric,
Floating a => a->a function
test1f x = let y = x * x in y + 1 (which can be located in a separately compiled file), we reify it into a TH code expression, print it, differentiate it symbolically, and algebraically simplify the result:
*Diff> test1cp
\dx_0 -> GHC.Num.+ (GHC.Num.* dx_0 dx_0) 1
*Diff> test1dp
\dx_0 -> GHC.Num.+ (GHC.Num.+ (GHC.Num.* 1 dx_0) (GHC.Num.* dx_0 1)) 0
*Diff> test1dsp
\dx_0 -> GHC.Num.+ dx_0 dx_0

The output is produced by TH's pretty-printer. We can splice the result in a Haskell program as
$(reflectQC test1ds) 2.0 and use it as the ordinary, hand-written function
\x -> x+x.
The main implementation file, which defines the reification of code into TH representation, differentiation rules and algebraic simplification rules, all via the intensional analysis of the typed code. The file also includes many examples, including those of partial differentiation.
Running the splicing tests from Diff.hs. Due to the TH requirement, this code must be in a separate module.
This file introduces the type
Code a of typed TH code expressions. The (phantom) type parameter is the expression's type. The file defines combinators for building and analyzing these typed expressions.
Obtain the Name that corresponds to a top-level (Prelude-level) Haskell identifier by applying TH to itself.
The PEPM Symposium/Workshop series is about the theory and practice of program transformation understood broadly, ranging from program manipulation such as partial evaluation, to program analyses in support of program manipulation, to treating programs as data objects (metaprogramming). PEPM focuses on techniques, supporting theory, tools, and applications of the analysis and manipulation of programs. PEPM specifically stresses that each technique or tool of program manipulation should have a clear, although perhaps informal, statement of desired properties, along with an argument how these properties could be achieved. The papers included in this special issue reflect the entire scope of PEPM, its interplay of theory and practice, and its stress on rigor and clarity.