<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"><style id="css_styles">
blockquote.cite { margin-left: 5px; margin-right: 0px; padding-left: 10px; padding-right:0px; border-left: 1px solid #cccccc }
blockquote.cite2 {margin-left: 5px; margin-right: 0px; padding-left: 10px; padding-right:0px; border-left: 1px solid #cccccc; margin-top: 3px; padding-top: 0px; }
a img { border: 0px; }
li[style='text-align: center;'], li[style='text-align: center; '], li[style='text-align: right;'], li[style='text-align: right; '] { list-style-position: inside;}
body { font-family: Segoe UI; font-size: 12pt; }
.quote { margin-left: 1em; margin-right: 1em; border-left: 5px #ebebeb solid; padding-left: 0.3em; }
</style>
</head>
<body><div>Welcome to the IDA Machine Learning Seminar on Wednesday, April 21, 15:15 (Swedish time).</div><div><br></div>
<div><b>Rémi Bardenet</b>, CNRS &amp; CRIStAL, Université de Lille, France</div>
<div><a href="http://rbardenet.github.io/">http://rbardenet.github.io/</a></div>
<div><br></div>
<div><b>Monte Carlo integration with repulsive point processes</b><br><i>Abstract:</i> Monte Carlo integration is the workhorse of Bayesian inference, but the mean square error of Monte Carlo estimators decreases slowly, typically as 1/N, where N is the number of integrand evaluations. This becomes a bottleneck in Bayesian applications where a single evaluation of the integrand can take tens of seconds, as in the life sciences, where evaluating the likelihood often requires solving a large system of differential equations. I will present two approaches to faster Monte Carlo rates using interacting particle systems. First, I will show how results from random matrix theory lead to a stochastic version of Gaussian quadrature in any dimension d, with mean square error decreasing as 1/N<sup>1+1/d</sup>. This quadrature is based on determinantal point processes, which can be argued to be the kernel machine of point processes. Second, I will show how to reduce this error rate further when the integrand is smooth. In particular, I will give a tight error bound when the integrand belongs to an arbitrary reproducing kernel Hilbert space, using a mixture of determinantal point processes tailored to that space. This mixture is reminiscent of volume sampling, a randomized experimental design used in linear regression.<br><br>Joint work with Adrien Hardy, Ayoub Belhadji, and Pierre Chainais.<br><br><i>Zoom link:</i> <a href="https://liu-se.zoom.us/j/69011766298">https://liu-se.zoom.us/j/69011766298</a><br><i>Passcode:</i> 742124<br><br></div>
<div><br></div>
<div>-------<br>The list of future seminars in the series is available at <a href="http://www.ida.liu.se/research/machinelearning/seminars/">http://www.ida.liu.se/research/machinelearning/seminars/</a>.<br><br>Welcome!<br>IDA Machine Learning Group<br>Linköping University</div>
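<div><br></div>
<div><i>A small numerical illustration (not part of the talk):</i> the Python sketch below estimates the mean square error of plain Monte Carlo integration at several sample sizes N, showing the baseline 1/N rate mentioned in the abstract, which the determinantal quadrature in the talk improves to 1/N<sup>1+1/d</sup>. The integrand, sample sizes, and replication count are illustrative choices; only NumPy is assumed.</div>
<pre>
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative integrand on [0, 1] with a known integral:
    # the integral of cos(2*pi*x)**2 over [0, 1] equals 1/2.
    return np.cos(2 * np.pi * x) ** 2

true_value = 0.5

for N in [100, 1_000, 10_000]:
    # Average the squared error over independent replications to
    # estimate the mean square error at sample size N.
    sq_errors = []
    for _ in range(200):
        x = rng.random(N)            # N i.i.d. uniform evaluation points
        estimate = f(x).mean()       # plain Monte Carlo estimator
        sq_errors.append((estimate - true_value) ** 2)
    print(N, np.mean(sq_errors))     # MSE shrinks roughly like 1/N
</pre>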
</body></html>