Have you ever encountered a model that you know is scientifically sound, but that MCMC just wouldn’t run? The model takes forever to sample, if it samples at all, and you’re greeted with a pile of divergences at the end. Yeah, I know, my stress levels start rising too whenever I hear the word “divergences”…

Well, you’ll be glad to hear there are tricks to make these models run, and one of them is called re-parametrization. I bet you’ve already heard of the poorly named non-centered parametrization?
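
In case the term is new to you, here is the idea in a nutshell. The sketch below is in plain NumPy rather than in any particular PPL, and the model (a single Normal(mu, tau) level, with made-up values for mu and tau) is purely illustrative. Instead of sampling a parameter directly from Normal(mu, tau), the non-centered version samples a standard normal variable and then shifts and scales it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative hyperparameters (names and values made up for this sketch):
mu, tau = 1.0, 0.5

# Centered parametrization: sample theta directly from Normal(mu, tau).
theta_centered = rng.normal(mu, tau, size=n)

# Non-centered parametrization: sample an auxiliary standard normal z,
# then shift and scale it. theta = mu + tau * z has exactly the same
# Normal(mu, tau) distribution, but a sampler now explores
# z ~ Normal(0, 1), whose geometry no longer depends on mu and tau;
# in hierarchical models, that is what tames the "funnel" shape
# responsible for divergences.
z = rng.normal(size=n)
theta_noncentered = mu + tau * z

# Both parametrizations describe the same distribution:
print(theta_centered.mean(), theta_centered.std())        # about 1.0, 0.5
print(theta_noncentered.mean(), theta_noncentered.std())  # about 1.0, 0.5
```

Both versions define the same distribution for theta; what changes is the geometry the sampler has to explore, and that is precisely the kind of transformation Maria’s work automates.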

Well, fear no more! In this episode, Maria Gorinova will tell you all about these model re-parametrizations! Maria is a PhD student in Data Science & AI at the University of Edinburgh. Her broad interests range from programming languages and verification to machine learning and human-computer interaction.

More specifically, Maria is interested in probabilistic programming languages, and in applying program-analysis techniques to existing PPLs in order to improve a language’s usability or the efficiency of inference.

As you’ll hear in the episode, she thinks a lot about the language aspect of probabilistic programming, and works on automating various probabilistic-programming “tricks”: automatic re-parametrization, automatic marginalization, and automatic, efficient model-specific inference.

As Maria also has experience with several PPLs, such as Stan, Edward2, and TensorFlow Probability, she’ll tell us what she thinks good PPL design requires, and what the future of PPLs looks like to her.

Our theme music is “Good Bayesian”, by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/!

Links from the show:

Maria on the Web: http://homepages.inf.ed.ac.uk/s1207807/index.html

Maria on Twitter: https://twitter.com/migorinova

Maria on GitHub: https://github.com/mgorinova

Automatic Reparameterisation of Probabilistic Programs (Maria's paper with Dave Moore and Matthew Hoffman): https://arxiv.org/abs/1906.03028

Stan User's Guide on Reparameterization: https://mc-stan.org/docs/2_23/stan-users-guide/reparameterization-section.html

HMC for hierarchical models -- Background on reparameterization: https://arxiv.org/abs/1312.0906

NeuTra -- Automatic reparameterization: https://arxiv.org/abs/1903.03704

Edward2 -- A library for probabilistic modeling, inference, and criticism: http://edwardlib.org/

Pyro -- Automatic reparameterization and marginalization: https://pyro.ai/

Gen -- Programmable inference: http://probcomp.csail.mit.edu/software/gen/

TensorFlow Probability: https://www.tensorflow.org/probability/

---

Send in a voice message: https://anchor.fm/learn-bayes-stats/message