The sort of reasoning we wish to perform on the model is finding the probability distribution of some of its random variables. For example, we can work out from the model that the probability of the grass being wet is 60.6%. Such reasoning is called probabilistic inference. Often we are interested in the distribution conditioned on the fact that some random variables have been observed to hold particular values. In our example, having observed that the grass is wet, we want to find out the chance that it was raining on that day. For background on statistical modeling and inference, the reader is referred to Pearl's classic text and to Getoor and Taskar's collection.
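The two inference queries above can be answered by brute-force enumeration of all elementary outcomes. The following sketch assumes the standard noisy-or grass model (it rains with probability 0.3, the sprinkler is on with probability 0.5, and the grass gets wet from rain, the sprinkler, or some other cause with probabilities 0.9, 0.8 and 0.1 respectively); these parameters are an assumption, chosen because they reproduce the 60.6% figure quoted in the text.

```python
from itertools import product

def grass_model():
    """Yield (probability, rain, wet) for each elementary outcome
    of the noisy-or grass model (parameters assumed, see above)."""
    # Biases of the five coin flips: rain, sprinkler, and whether
    # each potential cause (rain, sprinkler, other) actually wets the grass.
    flips = [0.3, 0.5, 0.9, 0.8, 0.1]
    for outcome in product([True, False], repeat=5):
        rain, sprinkler, by_rain, by_sprinkler, by_other = outcome
        p = 1.0
        for bit, bias in zip(outcome, flips):
            p *= bias if bit else 1.0 - bias
        wet = (rain and by_rain) or (sprinkler and by_sprinkler) or by_other
        yield p, rain, wet

# Marginal: P(grass wet)
p_wet = sum(p for p, _, wet in grass_model() if wet)
# Joint, for conditioning: P(rain and grass wet)
p_rain_and_wet = sum(p for p, rain, wet in grass_model() if rain and wet)

print(f"P(grass wet)        = {p_wet:.3f}")                  # -> 0.606
print(f"P(rain | grass wet) = {p_rain_and_wet / p_wet:.3f}")  # -> 0.468
```

Conditioning is just Bayes' rule over the enumerated outcomes: P(rain | wet) = P(rain and wet) / P(wet). Enumeration is exponential in the number of random variables; it is the baseline that more sophisticated inference procedures improve upon.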
Lise Getoor and Ben Taskar.
Introduction to Statistical Relational Learning
MIT Press, 2007.
David Wingate, Andreas Stuhlmueller and Noah D. Goodman:
Lightweight Implementations of Probabilistic Programming Languages
Via Transformational Compilation
AISTATS 2011. Revision 3. February 8, 2014
PPS2017-poster.pdf [98K]
Poster at PPS 2017
As the importance of ML increases, the scalability problem of developing ML applications becomes more and more pressing. Currently, tackling a non-trivial machine-learning task requires expertise both in the modeled domain and in probabilistic inference methods and their efficient implementations on modern hardware. The tight coupling between the model and the efficient inference procedure hinders making changes and precludes reuse. When the model changes significantly, the inference procedure often has to be rewritten from scratch.
Probabilistic programming -- which decouples modeling and inference and lets them be separately written, composed and reused -- has the potential to make it remarkably easier to develop new ML applications and keep adjusting them, while increasing confidence in the accuracy of the results. That promise has been recognized by the U.S. Defense Advanced Research Projects Agency (DARPA), which initiated the broad program ``Probabilistic Programming for Advancing Machine Learning (PPAML)'', started in March 2013 and running through 2017.
Developing the potential of probabilistic programming requires applying recent insights from programming language (PL) research, such as supercompilation and metaprogramming. A surprising challenge is correctness: it turns out that a number of well-known and widely used libraries and systems such as STAN may produce patently wrong results on some problems (as demonstrated by Hur et al., FSTTCS 2015).
We proposed a discussion-heavy workshop to promote the evident and growing interest of developers of ML/probabilistic domain-specific languages in program generation and transformation, and of programming language researchers in ML applications. It was to discuss the as-yet-unanswered challenges and the issues raised at the POPL-affiliated ``Probabilistic Programming Semantics'' workshops, but in more depth and detail. The meeting took place at the Shonan Village Center, Kanagawa, Japan, on May 22-25, 2018.
The following questions were raised repeatedly during the meeting:
shonan-113-report.pdf [158K]
The final report