
Staging, Program Generation, and Meta-Programming

 


 

A monadic approach for avoiding code duplication when staging memoized functions

[The Abstract of the paper]
Building program generators that do not duplicate generated code can be challenging. At the same time, code duplication can easily increase both generation time and runtime of generated programs by an exponential factor. We identify an instance of this problem that can arise when memoized functions are staged. Without addressing this problem, it would be impossible to effectively stage dynamic programming algorithms. Intuitively, direct staging undoes the effect of memoization. To solve this problem once and for all, and for any function that uses memoization, we propose a staged monadic combinator library. Experimental results confirm that the library works as expected. Preliminary results also indicate that the library is useful even when memoization is not used.

The paper introduces the continuation+state monad for code generation with let-insertion, and briefly describes abstract interpretation for generating efficient code without intensional analysis of the generated code. These are the foundational techniques for typeful multi-staged (MetaOCaml) programming.
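
The problem and the flavor of its monadic solution can be sketched in plain Haskell, with code values as strings and names of our own invention (the paper's library is typed and targets MetaOCaml). Naive staging splices each recursive result textually, so even a memoized generator emits exponentially large code; a generator that memoizes and let-inserts emits each sub-result once:

```haskell
import Control.Monad.State
import qualified Data.Map as M

-- Naive staging: memoizing fibNaive itself would not help, because the
-- result *text* is still duplicated at every splice point.
fibNaive :: Int -> String
fibNaive n | n < 2     = "1"
           | otherwise = "(" ++ fibNaive (n-1) ++ " + " ++ fibNaive (n-2) ++ ")"

-- Generator state: memo table (argument -> variable name),
-- accumulated let-bindings, and a fresh-name counter.
type Gen = State (M.Map Int String, [String], Int)

-- Staged, memoized Fibonacci: each distinct sub-result is let-bound
-- once and thereafter referred to by its variable, so the generated
-- code is linear in n.
fibGen :: Int -> Gen String
fibGen n = do
  (memo, _, _) <- get
  case M.lookup n memo of
    Just v  -> return v            -- code for n already generated: reuse it
    Nothing -> do
      e <- if n < 2
             then return "1"
             else do a <- fibGen (n-1)
                     b <- fibGen (n-2)
                     return (a ++ " + " ++ b)
      (m, bs, c) <- get
      let v = "t" ++ show c
      put (M.insert n v m, bs ++ ["let " ++ v ++ " = " ++ e ++ " in"], c+1)
      return v

fibCode :: Int -> String
fibCode n = let (v, (_, bs, _)) = runState (fibGen n) (M.empty, [], 0)
            in unlines bs ++ v
```

For n = 20, fibNaive produces a term with thousands of additions, while fibCode produces 21 let-bindings.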

Joint work with Walid Taha, Kedar N. Swadi, and Emir Pasalic.

Version
The current version is January 2006.
References
Proceedings of the 2006 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation (PEPM), 2006, Charleston, South Carolina, USA, January 9-10, 2006, pp. 160-169.
< http://www.cs.rice.edu/~taha/publications/conference/pepm06.pdf >

Dynamic Programming Benchmark: The corresponding MetaOCaml source code
< http://www.metaocaml.org/examples/dp/ >

Shifting the Stage: Staging with Delimited Control
An alternative, notationally more convenient approach relying on delimited control rather than monads

 

Tagless-Staged: Combinators for Impure yet Hygienic Code Generation

We present a combinator library to write (Haskell) programs that generate code in a subset of Haskell, OCaml, or other target language. The library embeds an extensible typed higher-order domain-specific language. It statically assures not only that all generated code is well-formed and well-typed but also that all generated variables are bound as expected. Such assurances are crucial for code generators to be written by domain experts rather than compiler writers; after all, the most profitable optimizations are domain-specific ones. These static guarantees are maintained in the presence of arbitrary effects, including IO. Effects are indispensable in code generators, to report failures, to generate alternatives to choose the fastest from, to introduce auxiliary variables, to interchange loops, etc.

Our framework is not an experimental language with a new syntax and an uncertain future. Rather, it is a regular library, not a bleeding-edge one, in mature Haskell. It is modular and extensible: one can add more features and constants to the target language, or change it to be typed or untyped, first- or higher-order. Polymorphism over generated environments makes generators composable. The higher-order abstract syntax used in the library makes target functions and variables human-readable. Our library may be regarded as `staged Haskell'.
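
The flavor of such a combinator library can be conveyed by a tagless-final sketch (the names Sym, R, etc. are ours, for illustration only; the actual TSCore interface is richer):

```haskell
-- A typed, extensible embedded target language: terms are built only
-- from these combinators, so every term is well-typed by construction.
class Sym repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int
  lam :: (repr a -> repr b) -> repr (a -> b)   -- higher-order abstract syntax
  app :: repr (a -> b) -> repr a -> repr b

-- One interpretation: run the generated code directly. Binders are
-- host-language functions, so generated variables cannot be ill-scoped.
newtype R a = R { unR :: a }
instance Sym R where
  int       = R
  add x y   = R (unR x + unR y)
  lam f     = R (unR . f . R)
  app f x   = R (unR f (unR x))

-- A generator polymorphic in the interpretation
ex :: Sym repr => repr (Int -> Int)
ex = lam (\x -> add x (int 1))
```

Adding another instance of Sym (e.g., a pretty-printer, or a C-code emitter) extends the set of interpretations without touching existing generators, which is the sense in which such a library is modular.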

There are many code generation frameworks with one or another of these features; ours has all of them. Template Haskell, among other systems, falls short, assuring -- dynamically rather than statically -- the well-typedness of only the final code. Furthermore, even generated code that passes that final type-check may still be ill-scoped.

The static types of our generator expressions not only ensure that a well-typed generator produces well-typed and well-scoped code. They also express the lexical scopes of generated binders and prevent mixing up variables with different scopes. For the first time we demonstrate, in an embedded domain-specific language, statically safe and well-scoped loop interchange and constant factoring from arbitrarily nested loops.

Joint work with Yukiyoshi Kameyama and Chung-chieh Shan.

Version
The current version is January 2014.
References
beyond.pdf [267K]
Combinators for Impure yet Hygienic Code Generation
The paper published in the Proceedings of the ACM SIGPLAN 2014 Workshop on Partial Evaluation and Program Manipulation (PEPM 14), pp. 3-14. January 20-21, 2014, San Diego, CA, USA.
doi:10.1145/2543728.2543740

beyond-talk.pdf [193K]
The annotated slides of the talk presented at PEPM 14 on January 20, 2014 in San Diego, CA, USA.

TSCore.hs [27K]
The core of the code generation library

TSCPST.hs [17K]
Code generation with control effects reaching beyond the closest binder. Far-reaching let-insertion. The implementation of the CPSA applicative hierarchy.

Examples and Sample code

 

The MetaOCaml files: Status report and research proposal

[The Abstract]
Custom code generation is the leading approach to reconciling generality with performance. MetaOCaml, a dialect of OCaml, is today the best-developed way to write custom code generators and to assure their type safety across multiple stages of computation. Nevertheless, the continuing interest from the community has yet to result in a mature implementation of MetaOCaml that integrates cleanly with OCaml's type checker and run-time system. Even in theory, it is unclear how staging interacts with effects, polymorphism, and user-defined data types.

We report on the status of the ongoing MetaOCaml project, focusing on the gap between theory and practice and the difficulties that arise in a full-featured staged language rather than an idealized calculus. We highlight foundational problems in type soundness and cross-stage persistence that demand investigation. We also suggest a lightweight implementation of a two-stage dialect of OCaml, as syntactic sugar.

Joint work with Chung-chieh Shan.

Version
The current version is September 2010.
References
metaocaml-files.pdf [95K]
The extended abstract published in the informal proceedings of the 2010 ACM SIGPLAN Workshop on ML.

Chung-chieh Shan. Slides of the talk at the ML workshop. Baltimore, MD, September 26, 2010.
< http://www.cs.rutgers.edu/~ccshan/metafx/metaocaml-slides.pdf >

BER MetaOCaml: a streamlined version of MetaOCaml

 

MetaScheme, or untyped MetaOCaml

We implement the four MetaOCaml special forms -- bracket, escape, cross-stage persistence (aka `lift'), and run -- in R5RS Scheme. A Scheme system thus becomes untyped MetaOCaml, in which we can write and run code with an arbitrary number of stages. We point out how distinct the staging annotations are from Scheme's quasiquotation, despite superficial similarity. Our implementation of cross-stage persistent (CSP) values closely corresponds to the one in MetaOCaml and so helps explain the latter.

Here is sample MetaOCaml code and its Scheme translation:

     # let eta = fun f -> .<fun x -> .~(f .<x>.)>.
       in .<fun x -> .~(eta (fun y -> .<x + .~y>.))>.
     - : ('a, int -> int -> int) code = .<fun x_1 -> fun x_2 -> (x_1 + x_2)>.
     
      (let ((eta (lambda (f) (bracket (lambda (x) (escape (f (bracket x))))))))
       (bracket (lambda (x) (escape (eta (lambda (y) (bracket (+ x (escape y)))))))))
Translating MetaOCaml code into Scheme seems trivial: code values are like S-expressions, MetaOCaml's bracket is like quasiquote, escape is like unquote, and `run' is eval. If we indeed replace bracket with quasiquote and escape with unquote in the above Scheme code and then evaluate it, we get the S-expression (lambda (x) (lambda (x) (+ x x))) , which is a wrong result, quite different from the code expression .<fun x_1 -> fun x_2 -> (x_1 + x_2)>. produced by MetaOCaml. The latter is the code of a curried function that adds two integers; the S-expression produced by the naive Scheme translation represents a curried function that disregards the first argument and doubles the second. This demonstrates that the often-mentioned `similarity' between bracket and quasiquote is misleading. MetaOCaml's bracket respects alpha-equivalence; in contrast, Scheme's quasiquotation, being a general form for constructing arbitrary S-expressions (not necessarily representing any code), is oblivious to the binding structure.

Our implementation still uses S-expressions for code values (so we can print them and eval-uate them) and treats escape as unquote. To maintain hygiene, we need to make sure that every run-time evaluation of a bracket form such as (bracket (lambda (x) x)) gives '(lambda (x_new) x_new) with a unique bound variable x_new . Two examples in the source code demonstrate why static renaming of manifestly bound identifiers is not sufficient. We implement a very clever suggestion by Chung-chieh Shan and represent a staged expression such as .<(fun x -> x + 1) 3>. by the sexp-generating expression `(,(let ((x (gensym))) `(lambda (,x) (+ ,x 1))) 3) , which evaluates to the S-expression ((lambda (x_1) (+ x_1 1)) 3) . Thus bracket is a complex macro that transforms its body into the sexp-generating expression, keeping track of the levels of brackets and escapes. The bracket macro is implemented as a CEK machine with a defunctionalized continuation. In our implementation, the Scheme translation of the eta example yields the S-expression (lambda (g6) (lambda (g8) (+ g6 g8))) . Just as in MetaOCaml, the result represents the curried addition.
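
The gensym technique can also be sketched outside Scheme. Here is a toy rendition in Haskell (names are ours, for illustration): the body of a generated binder is a function of the freshly generated name, so every evaluation yields an alpha-renamed copy and nested binders can never capture each other.

```haskell
import Control.Monad.State

type GenName = State Int              -- a fresh-name supply

fresh :: GenName String
fresh = do n <- get; put (n + 1); return ("x_" ++ show n)

-- The analogue of (bracket (lambda (x) ...)): mirrors the
-- sexp-generating expression `(lambda (,(gensym)) ...)
lam :: (String -> GenName String) -> GenName String
lam body = do x <- fresh
              b <- body x
              return ("(lambda (" ++ x ++ ") " ++ b ++ ")")

-- The eta example: both binders are named x in the source, yet the
-- generated code keeps them distinct.
etaEx :: String
etaEx = evalState
          (lam (\x -> lam (\y -> return ("(+ " ++ x ++ " " ++ y ++ ")"))))
          0
-- etaEx == "(lambda (x_0) (lambda (x_1) (+ x_0 x_1)))"
```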

CSP poses another implementation problem. In MetaScheme we can write the MetaOCaml expression .<fun x -> x + 1>. as (bracket (lambda (x) (+ x 1))) , which yields an S-expression (lambda (g1) (+ g1 1)) . When we pass this code to eval, the identifier + will be looked up in the environment of eval, which is generally different from the environment that was in effect when the original bracket form was evaluated. That might not look like much of a difference since + is probably bound to the same procedure for adding numbers in either environment. This is no longer the case if we take the following MetaOCaml code and its putative MetaScheme translation:

     let test = let id u = u in
                   .<fun x -> id x>.
      
     (define test (let ((id (lambda (u) u)))
                    (bracket (lambda (x) (id x)))))
The latter definition binds test to the S-expression (lambda (x) (id x)) that contains a free identifier id , unlikely to be bound in the environment of eval. Our code values therefore should be `closures', being able to include values that are (possibly locally) bound in the environment when the code value was created. Incidentally, the problem of code closures closed over the environment of the generator also appears in syntax-case macros. R6RS editors made a choice to prohibit the occurrence of locally-bound identifiers in the output of a syntax-case transformer.

MetaOCaml, among other staged systems, does permit the inclusion of values from the generator stage in the generated code. Such values are called cross-stage persistent (CSP); they are evident in the output of the MetaOCaml interpreter for the above test:

     val test : ('a, 'b -> 'b) code =
     .<fun x_1 -> (((* cross-stage persistent value (as id: id) *)) x_1)>.
In MetaScheme, we have to write CSP explicitly: (% e) , which is often called `lift'.
     (define test
       (let ((id (lambda (u) u)))
         (bracket (lambda (x) ((% id) x)))))
One may think that such lifting is already available in Scheme: (define (lift x) (list 'quote x)) . Although this works in simple cases of x being a number, a string, etc., it is neither universal nor portable: attempting to lift a closure or an output port this way could be an error. According to R5RS, the argument of a quotation is an external representation of a Scheme datum; closures, for example, are not guaranteed to have an external representation. For portability, we implement CSP via a reference into a global array, taking advantage of the fact that both the index (a number) and the name of the array (an identifier) have external representations and hence are trivially liftable by quotation. This is precisely the mechanism used by the current version of MetaOCaml.
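
The array-based mechanism can be sketched as follows (a toy Haskell rendition with invented names such as csp-array; the real implementation is in meta-scheme.scm): lifting a value stores it in the generator's table and splices into the code only the index, which is trivially quotable.

```haskell
import Control.Monad.State

-- The generator threads the table of lifted (CSP) values; generated
-- code refers to them only by position in a global array, so the code
-- itself remains a printable S-expression.
type GenCSP v = State [v]

csp :: v -> GenCSP v String
csp x = do xs <- get
           put (xs ++ [x])
           return ("(vector-ref csp-array " ++ show (length xs) ++ ")")

-- Lifting the local id from the test example (a string stands in for
-- the closure here):
testCode :: (String, [String])
testCode = runState
             (do r <- csp "id"                    -- the value to persist
                 return ("(lambda (x) (" ++ r ++ " x))"))
             []
-- fst testCode == "(lambda (x) ((vector-ref csp-array 0) x))"
```

At eval time, csp-array is bound to the saved values (here, the snd component), so the generated code finds the persisted value by index.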

The source code contains many more examples of the translation of MetaOCaml code into MetaScheme -- including the examples with many stages and CSP.

Joint work with Chung-chieh Shan.

Version
The current version is 1.4, Aug 20, 2008.
References
meta-scheme.scm [15K]
The complete implementation with many comments. It should work with any R5RS Scheme system. The code has been tested with Scheme 48 1.3, SCM 5e1, and Petite Chez Scheme 6.0a.

 

Type(class)-directed symbolic differentiation

We show how to take an ordinary numeric Haskell function, e.g., a term of the type D a => a -> a where D is a Floating-like class, and produce a function of the same type that is the symbolic derivative of the former. The original function can be given to us in a separately compiled module with no available source code. The derived function is an ordinary numeric Haskell function and can be applied to numeric arguments -- or differentiated further. The Floating-like class D currently supports arithmetic and a bit of trigonometry. We also support partial derivatives.
     test1f x = x * x + fromInteger 1
     test1 = test1f (2.0::Float)      -- 5.0
      
     test1f' x = diff_fn test1f x
     test1' = test1f' (3.0::Float)    -- 6.0
     
     test1f'' x = diff_fn test1f' x
     test1'' = test1f'' (10.0::Float) -- 2.0
The original function test1f can be evaluated numerically, test1 , or differentiated symbolically, test1f' . The result is again an ordinary numeric function (i.e., 2x), which can be applied to a numeric argument, see test1' . Alternatively, test1f' can be differentiated further.

The original function is emphatically not represented as an algebraic data type -- it is a numeric function like sin or tan . Still, we are able to differentiate it symbolically (rather than numerically or automatically). The key insight is that Haskell98 supports a sort of reflection -- or, to be precise, type-directed partial evaluation and hence term reconstruction.
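
The reflection insight can be conveyed with a first-order sketch (a toy Exp type of our own; the actual code avoids an algebraic data type and works with the numeric dictionaries directly): instantiating the overloaded function at a symbolic type reconstructs its body.

```haskell
-- A symbolic instance of Num: arithmetic builds a term
data Exp = Var | Lit Integer | Add Exp Exp | Mul Exp Exp
  deriving (Show, Eq)

instance Num Exp where
  fromInteger = Lit
  (+)    = Add
  (*)    = Mul
  negate = Mul (Lit (-1))
  abs    = error "abs: not needed for this sketch"
  signum = error "signum: not needed for this sketch"

-- Reify a (possibly separately compiled) numeric function by running
-- it on the symbol Var: the overloaded operations record themselves.
reify :: (Exp -> Exp) -> Exp
reify f = f Var

test1f :: Num a => a -> a
test1f x = x * x + fromInteger 1
-- reify test1f == Add (Mul Var Var) (Lit 1)
```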

Our approach also shows off the specification of the differentiation rules via type classes (which makes the rules extensible) and the emulation of GADT via type classes. In 2006, we improved the approach by developing an algebraic simplifier and by avoiding any interpretative overhead.

Version
The current version is 1.1, Nov 24, 2004.
References
differentiation.lhs [11K]
The complete literate Haskell code with more examples, including those of partial differentiation. It was originally posted as Typeful symbolic differentiation of compiled functions on the Haskell mailing list on Wed Nov 24 04:05:12 EST 2004.

Most optimal symbolic differentiation of compiled numeric functions

 

Most optimal symbolic differentiation of compiled numeric functions

We show symbolic differentiation of a wide class of numeric functions without imposing any interpretative overhead. The functions to differentiate can be given to us in a compiled form (in .hi files); their source code is not needed. We produce a (compiled, if needed) function that is an exact, algebraically simplified analytic derivative of the given function.

Our approach relies on reifying code into its `dictionary view', on intensional analysis of typed code expressions, and on staging so as to evaluate under lambda. We improve on the earlier, 2004 approach by algebraically simplifying the result of symbolic differentiation, and by removing interpretative overhead with the help of Template Haskell (TH). The computed derivative can be compiled down to machine code and so runs at full speed, as if it had been written by hand to start with.
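
The differentiation and simplification rules themselves can be illustrated on a first-order representation (a toy Exp type of our own; Diff.hs applies analogous rules to typed TH code expressions by intensional analysis):

```haskell
data Exp = Var | Lit Integer | Add Exp Exp | Mul Exp Exp
  deriving (Show, Eq)

-- The sum and product rules
diff :: Exp -> Exp
diff Var       = Lit 1
diff (Lit _)   = Lit 0
diff (Add a b) = Add (diff a) (diff b)
diff (Mul a b) = Add (Mul (diff a) b) (Mul a (diff b))

-- Algebraic simplification: constant folding and unit/zero laws
simp :: Exp -> Exp
simp (Add a b) = case (simp a, simp b) of
  (Lit 0, e)     -> e
  (e, Lit 0)     -> e
  (Lit m, Lit n) -> Lit (m + n)
  (x, y)         -> Add x y
simp (Mul a b) = case (simp a, simp b) of
  (Lit 0, _)     -> Lit 0
  (_, Lit 0)     -> Lit 0
  (Lit 1, e)     -> e
  (e, Lit 1)     -> e
  (Lit m, Lit n) -> Lit (m * n)
  (x, y)         -> Mul x y
simp e = e

-- d/dx (x*x + 1) simplifies to x + x, matching the test1dsp output:
-- simp (diff (Add (Mul Var Var) (Lit 1))) == Add Var Var
```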

In the process, we develop a simple type system for a subset of TH code expressions (TH is, sadly, completely untyped) -- so that accidental errors can be detected early. We introduce a few combinators for the intensional analysis of such typed code expressions. We also show how to reify an identifier like (+) to a TH.Name -- by applying TH to itself. Effectively we obtain more than one stage of computation.

Our technique can be considered the inverse of the TH splicing operation: given a (compiled) numeric expression of a host program, we obtain its source view as a TH representation. The latter can be spliced back into the host program and compiled -- after, perhaps, simplification, partial evaluation, or symbolic differentiation. As an example, given the definition of the ordinary numeric, Floating a => a->a function test1f x = let y = x * x in y + 1 (which can be located in a separately compiled file), we reify it into a TH code expression, print it, differentiate it symbolically, and algebraically simplify the result:

     *Diff> test1cp
     \dx_0 -> GHC.Num.+ (GHC.Num.* dx_0 dx_0) 1
     *Diff> test1dp
     \dx_0 -> GHC.Num.+ (GHC.Num.+ (GHC.Num.* 1 dx_0) (GHC.Num.* dx_0 1)) 0
     *Diff> test1dsp
     \dx_0 -> GHC.Num.+ dx_0 dx_0
The output is produced by TH's pretty-printer. We can splice the result into a Haskell program as $(reflectQC test1ds) 2.0 and use it as the ordinary, hand-written function \x -> x+x .

Version
The current version is 1.1, Dec 3, 2006.
References
Diff.txt [8K]
The message with the explanation of optimality and more examples. It was originally posted as Jones-optimal, typed, symbolic differentiation of (compiled) functions on the Haskell mailing list on Sun, 3 Dec 2006 17:56:27 -0800 (PST).

Diff.hs [9K]
The main implementation file, which defines the reification of code into TH representation, differentiation rules and algebraic simplification rules, all via the intensional analysis of the typed code. The file also includes many examples, including those of partial differentiation.

DiffTest.hs [<1K]
Running the splicing tests from Diff.hs. Due to the TH requirement, this code must be in a separate module.

TypedCode.hs [5K]
This file introduces the type Code a of typed TH code expressions. The (phantom) type parameter is the expression's type. The file defines combinators for building and analyzing these typed expressions.

TypedCodeAux.hs [2K]
Obtain the Name that corresponds to a top-level (Prelude-level) Haskell identifier by applying TH to itself.

 

Why a program in CPS specializes better

It has become folk knowledge that writing a program in continuation-passing style (CPS) helps a partial evaluator do more operations at specialization time and hence produce a better specialized program. The reason why has not spread as widely. The following example, distilled from the papers by Bondorf and by Lawall and Danvy, may help.

The example is the following simple expression.

     	\i -> (if i then 2 else 3) + 4
To partially evaluate it, we first annotate all sub-expressions as static S (known at compile time) or dynamic D (known only at run-time). That is, we perform `binding-time analysis' (BTA) and add binding-time annotations. For clarity, we show all the steps of the analysis. First, we see no applications of our function and hence do not statically know i:
     	\i_D -> (if i_D then 2 else 3) + 4
The if-test with the dynamic condition is dynamic:
     	\i_D -> (if i_D then 2 else 3)_D + 4
and the addition expression with a dynamic operand is dynamic:
     	\i_D -> ((if i_D then 2 else 3)_D + 4)_D
All sub-expressions are dynamic and hence there are no opportunities for partial evaluation.

Let's re-write the original expression in CPS:

     \i k -> let k' = \v -> k (v+4) in
             if i then k' 2 else k' 3
and do the binding-time analysis again. As before, function arguments, i and k, must be dynamic:
     \i_D k_D -> let k' = \v -> k_D (v+4) in
                 if i_D then k' 2 else k' 3
The variable k' is static since its value, the \v-> ... abstraction, is statically known. Constants are obviously static:
     	\i_D k_D -> let k'_S = \v -> k_D (v+4) in
     	            if i_D then k'_S 2_S else k'_S 3_S
There are now two applications of a statically known function to statically known arguments. We can perform them statically, at specialization (`compile') time, obtaining:
     	\i k -> if i then k 6 else k 7
which is certainly an improvement: there are no longer any additions left for the run-time.
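
The binding-time improvement can be replayed as a toy generator (plain Haskell, code values as strings, names of our own): in CPS, the continuation k' is a static function of the generator, so its applications to the static constants 2 and 3 are carried out while the code is being produced.

```haskell
type Code = String   -- residual code, shown as text, for illustration only

-- Direct style: the addition is stuck under the dynamic conditional,
-- so the generator can only emit it verbatim.
directGen :: Code
directGen = "\\i -> (if i then 2 else 3) + 4"

-- CPS: the static continuation \v -> k (v+4) runs at generation time.
cpsGen :: Code
cpsGen = "\\i k -> if i then k " ++ k' 2 ++ " else k " ++ k' 3
  where k' v = show (v + 4 :: Int)   -- the addition happens here, statically
-- cpsGen == "\\i k -> if i then k 6 else k 7"
```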

Bondorf (1992) showed an alternative to writing the source program in CPS: we should instead write the specializer (partial evaluator) in CPS, and skillfully massage the continuations to achieve binding-time improvements. The partial evaluator becomes harder to write, but source programs may remain in the original, direct style and still benefit from improved specialization. Lawall and Danvy (1994) showed that delimited control helps keep even the partial evaluator in direct style. Thus a specializer capable of the improvements shown here becomes only a little more difficult to write.

Version
The current version is January 2012.
References
Anders Bondorf: Improving Binding Times Without Explicit CPS-Conversion
Conf. Lisp and Functional Programming, 1992
The seminal paper on binding-time improvements.

Julia L. Lawall and Olivier Danvy: Continuation-Based Partial Evaluation
Conf. Lisp and Functional Programming, 1994

 

Editorial: PEPM2012 Special Issue of Higher-Order and Symbolic Computation

The five articles in this special issue are the extended, journal versions of papers first presented at the ACM SIGPLAN 2012 Workshop on Partial Evaluation and Program Manipulation. PEPM'12 took place in January 2012 in Philadelphia, Pennsylvania, USA. Out of the 19 papers presented at PEPM 2012, the program committee selected six and invited their authors to submit extended and improved versions to the special issue. Each submission was reviewed by three reviewers according to rigorous journal standards. Five were recommended for publication.

The PEPM Symposium/Workshop series is about the theory and practice of program transformation understood broadly, ranging from program manipulation such as partial evaluation, to program analyses in support of program manipulation, to treating programs as data objects (metaprogramming). PEPM focuses on techniques, supporting theory, tools, and applications of the analysis and manipulation of programs. PEPM specifically stresses that each technique or tool of program manipulation should have a clear, although perhaps informal, statement of desired properties, along with an argument how these properties could be achieved. The papers included in this special issue reflect the entire scope of PEPM, its interplay of theory and practice, and its stress on rigor and clarity.

Version
The current version is July 2013.
References
PEPM2012-Special-issue-editorial.pdf [43K]
The full Editorial describing the five papers
Higher-Order and Symbolic Computation. August-September 2013. Special issue with selected papers from PEPM 2012. Edited together with Julia Lawall and Simon Thompson
doi:10.1007/s10990-013-9092-0


Last updated March 9, 2014

This site's top page is http://okmij.org/ftp/

