As per the discussion at the end of the meeting yesterday: If you have open questions you’d like to share, please do so in comments here. If there’s enough interest, perhaps these could be collated into a more formal document.
Thanks to everyone (presenters, participants, POPL organizers, volunteers…) for making today’s workshop such a lively exchange of knowledge and human connections. Please post slides, posters, preprints, questions, comments, suggestions here on our blog.
Maybe it’s worth having a workshop like this again next year. We would love to hear your suggestions as to how the next workshop, if any, could better foster collaboration and establish common ground between programming-language and machine-learning researchers. Your suggestions might address scheduling, location, content, submission process, format, etc.
Thanks for posting your thoughts here!
A universal probabilistic programming language consists of a general-purpose language extended with two probabilistic features: the ability to make random (probabilistic) choices and the ability to make observations. For expressiveness and efficiency, it is useful to consider observations that have nonnegative real likelihoods rather than simple boolean truth values. A program in such a language represents a probabilistic process; the chance of producing a particular answer is determined by the random choices made along the way and the likelihoods of the observations.
Existing probabilistic programming languages typically support either a constrained form of observation (for example, requiring the observed distribution to be given in explicit form) or a general weighting operation such as factor. This work explores the interaction between observation and the other computational features of the language. We present a big-step semantics of importance sampling with likelihood weighting for a core universal probabilistic programming language with observation propagation.
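To make the two features concrete, here is a minimal sketch (ours, not the paper's semantics) of importance sampling with likelihood weighting: `run_model` makes a random choice and then multiplies a running weight by a nonnegative real likelihood instead of hard-rejecting on a boolean condition.

```python
import random

def run_model():
    # A run of a tiny probabilistic program, returning (answer, weight).
    weight = 1.0
    # Random (probabilistic) choice: a fair coin.
    x = 1 if random.random() < 0.5 else 0
    # Observation with a real-valued likelihood: evidence favors x == 1.
    weight *= 0.9 if x == 1 else 0.1
    return x, weight

def posterior_mean(n=100_000):
    # Importance sampling with likelihood weighting: average the answers,
    # weighting each run by the likelihoods of its observations.
    total_w = total_wx = 0.0
    for _ in range(n):
        x, w = run_model()
        total_w += w
        total_wx += w * x
    return total_wx / total_w  # approx. 0.9 = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.1)

random.seed(0)
print(posterior_mean())
```

The weighted estimate converges to the posterior probability of `x == 1`, which is 0.45 / 0.5 = 0.9 here; a hard boolean observation would instead discard the `x == 0` runs entirely.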
Extended abstract (PDF)
We have several papers proposing denotational semantics for various PP formalisms. It would be good to be able to compare them. Here’s a challenge to the authors of these papers:
For your semantics, please give a few examples of each of the following:
- interesting equivalences between open terms that are equated in your semantics. These should include both equivalences that are expressible in other formalisms, and those that are particular to the language that you are modeling.
- interesting equivalences between open terms that are not equated in your semantics. Please explain why you believe these pairs of terms should be equivalent, and why they are not equated by your semantics.
- terms that are equated in your semantics, but perhaps should not be considered equivalent (if any).
You can do this in the comments or as new posts, or at worst, in your posters.
Having these examples in hand will enable us to have much more fruitful discussions about the various proposed semantics.
Looking forward to a fun discussion,
We identify two problems with Wingate et al.'s lightweight implementation technique for probabilistic programs. Simple examples demonstrate that common program transformations, which ought to preserve semantics, drastically alter program behavior. We briefly describe an alternative technique that does respect such refactorings. It remains an open question how to be formally sure that the MCMC acceptance ratio is computed correctly.
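The flavor of the problem can be illustrated with a toy sketch (ours, not the paper's code): a lightweight trace maps each random choice's address to its stored value, and re-running a model under a trace reuses those values. If addresses are derived purely from syntactic position, extracting a helper function merges two distinct choices into one address.

```python
import random

def sample(trace, address, p=0.5):
    # Record a coin flip under a string address; reuse it if already traced,
    # as a lightweight MCMC engine would when replaying a program.
    if address not in trace:
        trace[address] = 1 if random.random() < p else 0
    return trace[address]

# Version A: two flips at two distinct source locations -> two addresses.
def model_a(trace):
    x = sample(trace, "model_a:1")
    y = sample(trace, "model_a:2")
    return x + y

# Version B: the "same" program after extracting a helper function.
# Both flips now occur at one source location, so positional naming
# assigns them the SAME address, and the second flip silently reuses
# the first flip's value.
def flip(trace):
    return sample(trace, "flip:1")

def model_b(trace):
    return flip(trace) + flip(trace)

# model_a can return 0, 1, or 2; model_b can only ever return 0 or 2.
```

A refactoring that any programmer would consider behavior-preserving thus changes the distribution the program denotes.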
Lightweight Problems (extended abstract)
PDF of draft, really this time
Draft paper as submitted to PPS’16.
We extend a stream-processing programming language, whose denotational semantics is given using the domain of streams, so that it supports probabilistic choice. We give a denotational semantics to the extended language (which we call SProcP). We take the domain of probability distributions over streams as the denotational domain; it constitutes a cpo by a result of Saheb-Djahromi [1].
This work is motivated by the second author's recent hybrid-system verification work [2]. In that work, we applied nonstandard analysis [3] to the denotational semantics of a stream-processing language so that it accommodates infinitesimal values; the extended language can express continuous signals and computation over them. We plan to make the same extension to the current language to obtain a language for processing probabilistic continuous-time signals.
The extension is for the most part straightforward. A tricky part is the denotation of binary operations such as e1 + e2. Since the denotations of the expressions e1 and e2 are in general not independent of each other, we cannot define the denotation of e1 + e2 from those of e1 and e2 alone. Our current definition assumes an opaque function that gives a joint distribution of the denotations of e1 and e2. If we can establish that e1 is independent of e2 (by some program analysis), the denotation of e1 + e2 can be given in a more explicit way.
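A toy example (ours, over finite distributions represented as dicts) shows why the marginals of e1 and e2 do not suffice: convolving them implicitly assumes independence, which fails as soon as the two expressions share a random variable.

```python
from itertools import product

def convolve(d1, d2):
    # Distribution of the sum of two INDEPENDENT variables:
    # the convolution of their marginal distributions.
    out = {}
    for (v1, p1), (v2, p2) in product(d1.items(), d2.items()):
        out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

# Let x be uniform on {0, 1}, and take e1 = x and e2 = x (NOT independent).
x = {0: 0.5, 1: 0.5}

# True distribution of x + x = 2x:
true_sum = {0: 0.5, 2: 0.5}

# Convolving the marginals wrongly assumes independence:
print(convolve(x, x))  # {0: 0.25, 1: 0.5, 2: 0.25}, not true_sum
```

This is why the semantics needs access to a joint distribution of the two denotations, and why an independence analysis would allow the more explicit convolution-style definition.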
We apologize that we have not yet revised our contribution to reflect the reviewers’ valuable comments. We may post an updated version later.
(On Jan. 17, 2016) We have uploaded the (slightly) revised version. PDF
[1] N. Saheb-Djahromi. Cpo’s of measures for nondeterminism. Theor. Comput. Sci., 12:19–37, 1980.
[2] K. Suenaga, H. Sekine, and I. Hasuo. Hyperstream processing systems: nonstandard modeling of continuous-time signals. In POPL 2013, pages 417–430, 2013.
[3] A. Robinson. Non-standard analysis. Studies in Logic and the Foundations of Mathematics. North-Holland Pub. Co., 1966. URL https://books.google.co.jp/books?id=-eJLAAAAMAAJ.
This is an abstract of a poster by Larry Moss, Chung-chieh Shan, and Alexandra Silva. The one-sentence summary of our proposal: we adapt ideas from coalgebraic semantics to the probabilistic setting, thereby obtaining intuitive semantics for probabilistic processes of various sorts.