Overloading resolution is one immediate application. Here, collaborative reference can resolve the tension between fancy polymorphism and the need to write long type annotations everywhere. The user submits a source-code fragment with few or no annotations. If the compiler gets stuck -- e.g., because of ambiguous overloading -- it asks the user for a hint. At the successful conclusion of the dialogue, the compiler saves all relevant context, in the form of inserted type annotations or as a proof tree attached to the source code. When revisiting the code later, the same or a different programmer may ask the compiler questions about the inferred types and the reasons for the chosen overloadings. We argue that such interactive overloading resolution should be designed into the type system from the start (e.g., using oracles).
Joint work with Chung-chieh Shan.
Post systems (think of generative grammars) are string transformations based on repeated, perhaps context-sensitive, substitutions -- replacements of substrings by other strings. The substitutions are expressed via a finite sequence of rules (grammar productions). The order of applying the rules is not defined. Normal Markov algorithms, true to their name, are a restricted, `normal' form of rule-based substitution systems. Substitutions are performed in a strictly defined order and regardless of the surrounding context. The whole string-transformation process runs deterministically and can be carried out by a simple mechanism, a `machine'. Normal Markov algorithms deserve to be called `algorithms'. (There is a play on words lost in English: `algorithm' is `алгоритм' in Russian; Markov called his system `алгорифм'.)
A normal Markov algorithm is hence a machine that repeatedly rewrites the given input string according to an ordered sequence of rewriting rules. Each rule is a pair of strings: the source src and the replacement rplc, each of which may be empty. Some rules may be marked as terminal. The work cycle of the machine is: find the first rule in the sequence whose src appears as a substring of the input string. Replace the leftmost occurrence of the src with the rplc of the rule. If the rule is terminal, stop. Otherwise, start the cycle anew, with the just-rewritten string as the input. If no rule applies, stop.
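To make the description concrete, here is a minimal OCaml sketch of such a machine. The names rule and run match the examples below, but this is only an illustration under assumptions: the actual code accompanying this article may differ, in particular in how terminal rules are represented.

  (* a rule: source string, replacement, and a terminal marker *)
  type rule = {src: string; rplc: string; term: bool}

  let rule   src rplc = {src; rplc; term = false}
  let rule_t src rplc = {src; rplc; term = true}    (* a terminal rule *)

  (* leftmost occurrence of sub in s, if any *)
  let find_sub s sub =
    let n = String.length s and m = String.length sub in
    let rec go i =
      if i + m > n then None
      else if String.sub s i m = sub then Some i
      else go (i + 1)
    in go 0

  (* replace the occurrence of src at position i with rplc *)
  let replace_at s i src rplc =
    String.sub s 0 i ^ rplc ^
    String.sub s (i + String.length src) (String.length s - i - String.length src)

  (* the work cycle: fire the first applicable rule on the leftmost match,
     print the step, and repeat until a terminal rule fires or no rule applies *)
  let run rules s =
    let rec cycle s =
      let rec try_rule i =
        if i >= Array.length rules then s           (* no rule applies: stop *)
        else match find_sub s rules.(i).src with
          | None -> try_rule (i + 1)
          | Some pos ->
              let s' = replace_at s pos rules.(i).src rules.(i).rplc in
              Printf.printf "%s -> %s\n" s s';
              if rules.(i).term then s' else cycle s'
      in try_rule 0
    in cycle s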
As an example, the following sequence of rules, written as an OCaml array, converts a big-endian binary number (a string of 0s and 1s) into the string of vertical bars encoding the same number in unary.
  let bin_to_unary = [| rule "1" "0|"; rule "|0" "0||"; rule "0" ""; |]

For example, run bin_to_unary "110", using the accompanying OCaml code, produces the string "||||||" of six bars. The code also prints the trace of fired rules and their intermediate results, showing the working of this clever algorithm.
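For the curious, here is the trace for "110", worked out by hand from the rules above (an illustration rather than verbatim program output):

  110        -> 0|10         rule "1"  -> "0|"
  0|10       -> 0|0|0        rule "1"  -> "0|"
  0|0|0      -> 00|||0       rule "|0" -> "0||"
  00|||0     -> 00||0||      rule "|0" -> "0||"
  00||0||    -> 00|0||||     rule "|0" -> "0||"
  00|0||||   -> 000||||||    rule "|0" -> "0||"
  000|||||| -> 00||||||     rule "0"  -> ""
  00||||||   -> 0||||||      rule "0"  -> ""
  0||||||    -> ||||||       rule "0"  -> ""

Each 1 is first turned into 0|; the rule "|0" -> "0||" then lets every 0 percolate to the left, doubling each bar it passes -- precisely the doubling needed to convert binary to unary. Finally, the spent 0s are erased.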
Curious readers may want to work out how the following rules

  let gcd = [| rule "aA" "Aa"; rule "a#a" "A#"; rule "a#" "#B";
               rule "B" "a"; rule "A" "C"; rule "C" "a"; rule "#" ""; |]

manage to rewrite the string made of i letters a followed by # and j more letters a into the string of k letters a, where k = gcd(i,j).
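As a quick sanity check, using the run sketched earlier: gcd 4 2 = 2, so a string of four letters a, then #, then two more letters a should reduce to two letters a.

  let () = assert (run gcd "aaaa#aa" = "aa")    (* gcd(4,2) = 2 *)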
Markov showed that all known classes of string transformations performed by a finite sequence of rewriting rules can be expressed as normal algorithms (that is, they are `normalizable'). This led him to posit that all algorithms are normalizable. He justified his thesis more thoroughly than either Turing or Church justified the corresponding Church-Turing thesis. Here is how V.M.Glushkov (cited from the English translation) describes this justification:
The validity of this [Markov normalization] principle is based first of all on the fact that all the algorithms known at the present time are normalizable. Since in the course of the long history of the development of the exact sciences a considerable number of different algorithms have been devised, this statement is convincing in itself. In actuality it is even more convincing. We can show that all the methods known at the present time for the composition of algorithms which make it possible to construct new algorithms from the already known ones do not go beyond the limits of the class of normalizable algorithms. ...
However this is not all. A whole series of scientists have undertaken special attempts to construct algorithms of a more general form and all these attempts have not been carried beyond the limits of the class of normalizable algorithms.
I'd like to clear up a common misunderstanding about the exact period when A.A.Markov developed his theory of algorithms. All the English sources that I have seen claim that Markov algorithms were the work of the 1960s -- most likely based on the publication date of the very brief English translation of Markov's work. However, the theory was published in Russian quite a bit earlier. According to the bibliography of A.A.Markov (referenced below), the first paper `Theory of algorithms' appeared in Steklov's Mathematics Institute journal in 1951. Three years later, in 1954, A.A.Markov published a 376-page book with the same title. He was already working on string algorithms in 1947, when he published his famous result: the undecidability of the word problem in semigroups.
<https://logic.pdmi.ras.ru/Markov/list.html>
Complete list of publications by A.A.Markov
<https://logic.pdmi.ras.ru/Markov/index.html>
Андрей Андреевич Марков (1903-1979)
This comprehensive web site
also has rare family photos (showing A.A.Markov's
father, the inventor of Markov chains) and poems written by A.A.Markov
Andrey Andreevich Markov 1960. The Theory of Algorithms. American Mathematical Society Translations, series 2, 15, 1-14.
В.М.Глушков. Введение в Кибернетику
Издательство Академии Наук Украинской ССР. Киев, 1964. 324 стр.
(V.M.Glushkov. Introduction to Cybernetics. Publishing House of the Academy of Sciences of the Ukrainian SSR. Kiev, 1964. 324 pp.)
There is an English translation:
V.M.Glushkov. Introduction to Cybernetics
U.S. Air Force Systems Command. Wright-Patterson Air Force
Base. Ohio. Translation division. Foreign technology division.
FTD-TT-65-942/1+2. AFLC-WPAFB-Mar 66 72. Feb 17, 1966.
V.F.Turchin. The Concept of a Supercompiler
ACM Transactions on Programming Languages and Systems, 1986, v8, 292--325.
doi:10.1145/5956.5957
Section 4 of the paper is probably the best presentation of Refal in
English, with many examples and a comparison with Lisp and Prolog.
Turchin emphasized that Refal is a pure functional language.
FLOPS brings together practitioners, researchers and implementers of declarative programming to discuss mutually interesting results and common problems: theoretical advances, their implementations in language systems and tools, and applications of these systems in practice. The scope includes all aspects of the design, semantics, theory, applications, implementations, and teaching of declarative programming. FLOPS specifically aims to promote cross-fertilization between theory and practice and among different styles of declarative programming. This wide scope and the interplay of theory and practice are reflected in the papers of this special issue.
We are grateful to the authors for preparing the extended versions of their papers and promptly addressing reviewers' comments. We deeply thank the reviewers for all the work they have done to evaluate and improve the submissions.
Joint editing with Andy King, University of Kent, UK
FLOPS-poster.pdf [125K]
The conference poster
It appears Z3 was further ahead of its time than we might have realized. Z3 had no branch instruction and instead relied on predicated execution. The latter technique -- implemented in the most recent upscale CPUs -- is considered to be ``one of the most significant [recent] advances in the history of computer architecture and compiler design.'' It seems that advance is not recent at all: Konrad Zuse invented it back in 1938. The letter to the Dr. Dobb's J. editors gives more details.
Zuse-accolades.txt [6K]
The letter to the editor regarding the above article.
It was published in Dr. Dobb's J., December 2000, p. 10
Zuse and Intel: A letter from reader Ben Laurie
Dr. Dobb's J., August 2001, p. 12
Ben Laurie points out that Acorn's ARM chip had the predicated
execution feature more than a decade earlier (but long after Z3)
Konrad Zuse Internet Archive
<http://www.zib.de/zuse/English_Version/index.html>
Algebraic effects and effect handlers are becoming an increasingly popular approach for expressing and composing computational effects. There are implementations of algebraic effects and effect handlers in Clojure, F#, Haskell, Idris, JavaScript, OCaml, PureScript, Racket, Scala, Scheme, Standard ML, and even C. There are full-fledged languages built around effects such as Eff, Frank, Links, Koka, and Multicore OCaml. Moreover, there is growing interest from industry in effect handlers. For instance, Facebook's React library for JavaScript UI programming is directly inspired by effect handlers, and Uber's recently released Pyro tool for probabilistic programming and deep learning relies on effect handlers. Such interest arises from the ease of combining, in the same program, independently developed components using algebraic effects and effect handlers.
The increased adoption and use of algebraic effects and effect handlers reveals and makes pressing three main problems: reasoning, performance, and typing. Given the complexity and importance of these problems, we believed that a face-to-face meeting of the main community representatives would promote their solution.
We identified five specific application areas to be discussed at the meeting in the context of the three main problem areas.
To promote mutual understanding, we planned the workshop to have substantial time available for discussion, emphasizing tutorials, brainstorming, and working-group sessions rather than mere conference-like presentations.
The seminar was organized together with Sam Lindley, Gordon Plotkin and Nicolas Wu. The Shonan seminar series is sponsored by Japan's National Institute of Informatics (NII).
shonan-handlers-report.pdf [221K]
The final report
A follow-up article showed various ways to simulate self-referential values and lazy computation in ML (OCaml).
Furthermore, we can create DAGs and general cyclic data structures even in a strict pure functional language. The following article ``On parent pointers in SXML trees'' demonstrates different ways of constructing tree-like data structures with forward and backward pointers. Some of the data structures are general graphs with directed cycles, some of them are DAGs. And yet some of them are genuine trees. How can a recursive data structure have forward and backward pointers and yet only a single path between any two nodes? How can we do that in a strict pure language? The article and the accompanying source code explain. The paradoxical tree is built by a procedure add-thunk-parent-pt, which is defined in section 'Solution 3' of the source code. The procedure is deceptively simple. It took 21 lines of Scheme code to implement the algorithm and 26 (full-length) lines of comments to explain it.
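For a taste of the trick in ML terms, here is a rough OCaml rendering of the general idea (the article's Scheme code, add-thunk-parent-pt, is more refined; all names below are illustrative assumptions): a child does not hold a pointer to its parent, but a thunk that recomputes the parent on demand.

  (* skeleton tree, without parent pointers *)
  type skel = S of string * skel list

  (* annotated tree: children are recomputed on demand, and each node
     carries a thunk that recomputes its parent *)
  type tree = {name: string; children: unit -> tree list; parent: unit -> tree option}

  let rec annotate ?(parent = fun () -> None) (S (nm, kids)) : tree =
    let rec self =
      {name = nm;
       children = (fun () ->
         List.map (fun k -> annotate ~parent:(fun () -> Some self) k) kids);
       parent}
    in self

  (* usage: navigate down, then back up *)
  let t    = annotate (S ("root", [S ("kid", [])]))
  let kid  = List.hd (t.children ())
  let back = kid.parent ()        (* evaluates to Some root-node, on demand *)

Because the root never holds on to the constructed children (they are rebuilt whenever children is forced), the heap contains no cycles, even though navigation both down and up is possible; and no mutation is involved anywhere.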
Unitary type as a self-referential set
Pure-functional transformations of cyclic graphs and the Credit Card Transform
Henry G. Baker: Equal Rights for Functional Objects or, The More Things Change, The More They Are the Same
On parent pointers in SXML trees
The article also explains practical benefits of creating and
manipulating cyclical structures in the side-effect-free manner.
number-of-targets times number-of-platforms to just number-of-targets plus number-of-platforms.
The language of GNU make is indeed functional, complete with combinators (map and filter), applications and anonymous abstractions. Yes, GNU make supports lambda-abstractions. The following is one example from the Makefile in question: it is a rule to build a test target for the SCM Scheme system. The list of source code files and the name of the target/root-test-file are passed as two arguments of the rule:
  make-scmi= scm -b -l $(LIBDIR)/myenv-scm.scm \
      $(foreach file,$(1),-l $(LIBDIR)/$(file)) \
      -l $(2).scm

The rule returns the OS command to interpret or compile the target. It is to be invoked as
  $(call make-scmi,util.scm catch-error.scm,vmyenv)

As in TeX, the arguments of a function are numbered (it is possible to assign them meaningful symbolic names, too). Makefile's foreach corresponds to Scheme's map. The comparison with the corresponding Scheme code is striking:
  (define make-scmi
    (lambda (arg1 arg2)
      `(scm -b -l ,(mks LIBDIR '/ 'myenv-scm.scm)
            ,@(map (lambda (file) `(-l ,(mks LIBDIR '/ file))) arg1)
            -l ,(mks arg2 '.scm))))
Makefile [7K]
The Makefile in question
Chapter 'Functions' from the make info pages
In a more recent discussion, Neelakantan Krishnaswami introduced the function choose as an indispensable part of an API for Set, letting us reason about sets algebraically and equationally.
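To make this concrete, here is a hypothetical OCaml sketch of such an interface; it is a guess at the general shape, not Krishnaswami's actual formulation (see the cited message for that):

  module type SET = sig
    type elt
    type t
    val empty  : t
    val add    : elt -> t -> t
    val member : elt -> t -> bool
    (* choose picks an arbitrary element and returns it together with the
       rest of the set; its equations relating it to empty and add are what
       support algebraic, equational reasoning about sets *)
    val choose : t -> (elt * t) option
  end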
Neelakantan Krishnaswami: Re: FP-style map over set
The message posted on the newsgroup comp.lang.functional
on Mon, 28 Nov 2005 23:45:27 +0000 (UTC)
sendmail.cf [37K]
The sendmail configuration file (sendmail.cf) augmented with rules to
implement the Turing machine. This file also defines two sample Turing
machines.
  - turing_machine((0,["#","a","a","a","a","a","a"],"#",[]),prog_decision);
  val it = (~1,["#","Yes"],"#",["#","#","#","#","#"]) : int * string list * string * string list

shows that starting the decision program (which decides if a string contains an even number of characters) with the tape #aaaaaa###... and the head positioned at the beginning of the tape leaves # Yes ####... on the tape after the machine halts (by reaching the state ~1, that is, -1). Thus, the machine has decided that the input string contained an even number of characters.