Results for "Max Welling"

2,483 results (took 0.12 s)
On the Choice of Regions for Generalized Belief Propagation (Jul 11 2012). Generalized belief propagation (GBP) has proven to be a promising technique for approximate inference tasks in AI and machine learning. However, the choice of a good set of clusters to be used in GBP has remained more of an art than a science until this ...
Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior (Aug 09 2014). In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization which has a number of disadvantages such as the ...
Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference (Jun 11 2015; revised Dec 02 2015). We describe an embarrassingly parallel, anytime Monte Carlo method for likelihood-free models. The algorithm starts with the view that the stochasticity of the pseudo-samples generated by the simulator can be controlled externally by a vector of random ...
Semi-Supervised Classification with Graph Convolutional Networks (Sep 09 2016). We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via ...
Semi-Supervised Classification with Graph Convolutional Networks (Sep 09 2016; revised Oct 24 2016). We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via ...
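The layer-wise propagation rule behind these graph convolutional networks multiplies the node features by a symmetrically normalized adjacency matrix with self-loops before applying the weight matrix and nonlinearity. A minimal NumPy sketch of one such layer (function and variable names are illustrative, not taken from the authors' code):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)                       # ReLU nonlinearity

# toy usage: 4 nodes, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```

Stacking a couple of these layers and training the weights on the labeled nodes gives the kind of semi-supervised model the abstract describes.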
Deep Scale-spaces: Equivariance Over Scale (May 28 2019). We introduce deep scale-spaces (DSS), a generalization of convolutional neural networks, exploiting the scale symmetry structure of conventional image recognition tasks. Put plainly, the class of an image is invariant to the scale at which it is viewed. ...
Exploiting the Statistics of Learning and Inference (Feb 26 2014; revised Mar 04 2014). When dealing with datasets containing a billion instances or with simulations that require a supercomputer to execute, computational resources become part of the equation. We can improve the efficiency of learning and inference by exploiting their inherent ...
Herding Dynamic Weights for Partially Observed Random Field Models (May 09 2012). Learning the parameters of a (potentially partially observable) random field model is intractable in general. Instead of focussing on a single optimal parameter value we propose to treat parameters as dynamical quantities. We introduce an algorithm to ...
Bayesian Random Fields: The Bethe-Laplace Approximation (Jun 27 2012). While learning the maximum likelihood value of parameters of an undirected graphical model is hard, modelling the posterior distribution over parameters given data is harder. Yet, undirected models are ubiquitous in computer vision and text modelling ...
Learning the Irreducible Representations of Commutative Lie Groups (Feb 18 2014; revised May 25 2014). We present a new probabilistic model of compact commutative Lie groups that produces invariant-equivariant and disentangled representations of data. To define the notion of disentangling, we borrow a fundamental principle from physics that is used to ...
Supervised Uncertainty Quantification for Segmentation with Multiple Annotations (Jul 03 2019). The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used ...
A note on privacy preserving iteratively reweighted least squares (May 24 2016). Iteratively reweighted least squares (IRLS) is a widely-used method in machine learning to estimate the parameters of generalised linear models. In particular, IRLS for L1 minimisation under the linear model provides a closed-form solution in each ...
Recurrent Inference Machines for Solving Inverse Problems (Jun 13 2017). Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural ...
Herding as a Learning System with Edge-of-Chaos Dynamics (Feb 09 2016; revised Mar 01 2016). Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" ...
Multiplicative Normalizing Flows for Variational Bayesian Neural Networks (Mar 06 2017; revised Jun 12 2017). We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward ...
Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors (Mar 15 2016; revised Jun 23 2016). We introduce a variational Bayesian neural network where the parameters are governed via a probability distribution on random matrices. Specifically, we employ a matrix variate Gaussian (Gupta & Nagar, 1999) parameter posterior distribution where we ...
GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation (Jan 13 2014). Scientists often express their understanding of the world through a computationally demanding simulation program. Analyzing the posterior distribution of the parameters given observations (the inverse problem) can be extremely challenging. The Approximate ...
Differentiable probabilistic models of scientific imaging with the Fourier slice theorem (Jun 18 2019; revised Jun 20 2019). Scientific imaging techniques such as optical and electron microscopy and computed tomography (CT) scanning are used to study the 3D structure of an object through 2D observations. These observations are related to the original 3D object through orthogonal ...
Bayesian Structure Learning for Markov Random Fields with a Spike and Slab Prior (Jun 05 2012; revised Jun 23 2012). In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization which has a number of disadvantages such as the ...
POPE: Post Optimization Posterior Evaluation of Likelihood Free Models (Dec 09 2014). In many domains, scientists build complex simulators of natural phenomena that encode their hypotheses about the underlying processes. These simulators can be deterministic or stochastic, fast or slow, constrained or unconstrained, and so on. Optimizing ...
Deep Spiking Networks (Feb 26 2016). We introduce the Spiking Multi-Layer Perceptron (SMLP). The SMLP is a spiking version of a conventional Multi-Layer Perceptron with rectified-linear units. Our architecture is event-based, meaning that neurons in the network communicate by sending "events" ...
Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets (Feb 03 2014; revised Jan 22 2015). Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other, by switching between ...
Group Equivariant Convolutional Networks (Feb 24 2016; revised Jun 03 2016). We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially ...
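For the first layer of such a network, a G-convolution over the group p4 (translations plus 90-degree rotations) can be realized by convolving the input with rotated copies of each filter, producing one orientation channel per rotation. A minimal PyTorch sketch of that lifting step (an illustrative simplification, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):
    """Convolve the input with 4 rotated copies of each filter.

    x:      (batch, in_ch, H, W)
    weight: (out_ch, in_ch, k, k)
    returns (batch, out_ch, 4, H', W'), one feature plane per rotation.
    """
    outs = []
    for r in range(4):
        w_rot = torch.rot90(weight, k=r, dims=(2, 3))  # rotate filter by r * 90 degrees
        outs.append(F.conv2d(x, w_rot))
    return torch.stack(outs, dim=2)

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
print(p4_lifting_conv(x, w).shape)  # torch.Size([1, 8, 4, 30, 30])
```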
Sigma Delta Quantized Networks (Nov 07 2016). Deep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing ...
Generalized Belief Propagation on Tree Robust Structured Region Graphs (Oct 16 2012). This paper provides some new guidance in the construction of region graphs for Generalized Belief Propagation (GBP). We connect the problem of choosing the outer regions of a Loop-Structured Region Graph (SRG) to that of finding a fundamental cycle basis ...
An Introduction to Variational Autoencoders (Jun 06 2019). Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models. In this work, we provide an introduction to variational autoencoders and some important extensions.
Harmonic Exponential Families on Manifolds (May 17 2015; revised May 20 2015). In a range of fields including the geosciences, molecular biology, robotics and computer vision, one encounters problems that involve random variables on manifolds. Currently, there is a lack of flexible probabilistic models on manifolds that are fast ...
Steerable CNNs (Dec 27 2016). It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flexible class ...
Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow (Jun 07 2017; revised Jun 14 2017). In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow. The idea is to enrich a linear Inverse Autoregressive Flow by introducing multiple lower-triangular matrices with ones on ...
Sigma Delta Quantized Networks (Nov 07 2016; revised Nov 10 2016). Deep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing ...
Improving Variational Auto-Encoders using Householder Flow (Nov 29 2016). Variational auto-encoders (VAE) are scalable and powerful generative models. However, the choice of the variational posterior determines tractability and flexibility of the VAE. Commonly, latent variables are modeled using the normal distribution with ...
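The Householder flow referenced here transforms a latent sample with a sequence of Householder reflections, which are orthogonal maps. A minimal NumPy sketch of one reflection applied to a sample z (variable names are illustrative):

```python
import numpy as np

def householder_step(z, v):
    """Reflect z across the hyperplane orthogonal to v: z' = (I - 2 v v^T / ||v||^2) z."""
    v = v / np.linalg.norm(v)
    return z - 2.0 * v * (v @ z)

z = np.random.randn(5)          # latent sample from the base posterior
v = np.random.randn(5)          # reflection vector (in the paper, predicted by the encoder)
z_new = householder_step(z, v)
print(np.isclose(np.linalg.norm(z_new), np.linalg.norm(z)))  # True: the norm is preserved
```

Because a Householder reflection is orthogonal, its absolute Jacobian determinant is 1, so no log-determinant correction enters the variational bound.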
Variational Graph Auto-Encoders (Nov 21 2016). We introduce the variational graph auto-encoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE). This model makes use of latent variables and is capable of learning interpretable latent ...
Semi-Supervised Classification with Graph Convolutional Networks (Sep 09 2016; revised Feb 22 2017). We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via ...
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation (Jan 10 2013). We present a novel inference algorithm for arbitrary, binary, undirected graphs. Unlike loopy belief propagation, which iterates fixed point equations, we directly descend on the Bethe free energy. The algorithm consists of two phases: first we update ...
Semi-Supervised Classification with Graph Convolutional Networks (Sep 09 2016; revised Nov 03 2016). We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via ...
Improving Variational Auto-Encoders using Householder Flow (Nov 29 2016; revised Dec 07 2016). Variational auto-encoders (VAE) are scalable and powerful generative models. However, the choice of the variational posterior determines tractability and flexibility of the VAE. Commonly, latent variables are modeled using the normal distribution with ...
Auto-Encoding Variational Bayes (Dec 20 2013; revised May 01 2014). How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning ...
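The estimator introduced in this paper rests on the reparameterization trick: a Gaussian latent variable is rewritten as a deterministic function of the variational parameters and independent noise, so the lower bound can be differentiated through the sampling step. A minimal PyTorch sketch of that step for a diagonal-Gaussian posterior (tensor names are illustrative):

```python
import torch

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)
z = reparameterize(mu, log_var)
z.sum().backward()               # gradients flow to mu and log_var through the sample
print(mu.grad, log_var.grad)
```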
An Introduction to Variational Autoencoders (Jun 06 2019; revised Jul 24 2019). Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models. In this work, we provide an introduction to variational autoencoders and some important extensions.
Improving Variational Auto-Encoders using Householder Flow (Nov 29 2016; revised Jan 27 2017). Variational auto-encoders (VAE) are scalable and powerful generative models. However, the choice of the variational posterior determines tractability and flexibility of the VAE. Commonly, latent variables are modeled using the normal distribution with ...
Deep Spiking Networks (Feb 26 2016; revised Nov 07 2016). We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold ...
Transformation Properties of Learned Visual Representations (Dec 24 2014; revised Apr 07 2015). When a three-dimensional object moves relative to an observer, a change occurs on the observer's image plane and in the visual representation computed by a learned model. Starting with the idea that a good visual representation is one that transforms ...
VAE with a VampPrior (May 19 2017; revised Feb 26 2018). Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, ...
A simple, exact, model of quasi-single field inflation (Jul 04 2019). In this note we present a simple but exact model of quasi-single field inflation (Chen & Wang, 2009), in which the couplings between perturbations are completely controlled, and for instance can be made constant with any desired value. This ...
Hybrid Variational/Gibbs Collapsed Inference in Topic Models (Jun 13 2012). Variational Bayesian inference and (collapsed) Gibbs sampling are the two important classes of inference algorithms for Bayesian networks. Both have their advantages and disadvantages: collapsed Gibbs sampling is unbiased but is also inefficient for large ...
Super-Samples from Kernel Herding (Mar 15 2012). We extend the herding algorithm to continuous spaces by using the kernel trick. The resulting "kernel herding" algorithm is an infinite memory deterministic process that learns to approximate a PDF with a collection of samples. We show that kernel herding ...
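Kernel herding selects each new sample greedily so that the growing sample set matches the target distribution in the kernel's feature space. A minimal NumPy sketch that picks super-samples from a finite candidate pool with an RBF kernel (the pool-based setup and the names are illustrative assumptions, not the paper's experimental protocol):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between the rows of a and the rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_herding(pool, n_select, gamma=1.0):
    """Greedy herding: pick the candidate maximizing the estimated mean embedding
    E_p[k(x, .)] minus the average kernel value to the samples chosen so far."""
    K = rbf(pool, pool, gamma)
    mean_embedding = K.mean(axis=1)        # estimates E_{x'~p} k(x, x') over the pool
    chosen = []
    for t in range(n_select):
        penalty = K[:, chosen].sum(axis=1) / (t + 1) if chosen else 0.0
        chosen.append(int(np.argmax(mean_embedding - penalty)))
    return pool[chosen]

pool = np.random.randn(500, 2)             # candidate points drawn from the target
samples = kernel_herding(pool, n_select=20)
print(samples.shape)                       # (20, 2)
```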
Soft Weight-Sharing for Neural Network Compression (Feb 13 2017; revised May 09 2017). The success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This, however, conflicts with their compute-, memory- and energy-intensive nature, leading to a growing interest in compression. ...
Combining Generative and Discriminative Models for Hybrid Inference (Jun 06 2019; revised Jun 10 2019). A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model. However, in many cases the generating process is only a poor approximation ...
BOCK: Bayesian Optimization with Cylindrical Kernels (Jun 05 2018). A major challenge in Bayesian Optimization is the boundary issue (Swersky, 2017) where an algorithm spends too many evaluations near the boundary of its search space. In this paper, we propose BOCK, Bayesian Optimization with Cylindrical Kernels, whose ...
Primal-Dual Wasserstein GAN (May 24 2018). We introduce Primal-Dual Wasserstein GAN, a new learning algorithm for building latent variable models of the data distribution based on the primal and the dual formulations of the optimal transport (OT) problem. We utilize the primal formulation to learn ...
Markov Chain Monte Carlo and Variational Inference: Bridging the Gap (Oct 23 2014; revised May 19 2015). Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables. This enables us to explore a new synthesis of variational ...
Probabilistic Binary Neural Networks (Sep 10 2018). Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Networks with both binary weights ...
Improved Bayesian Compression (Nov 17 2017; revised Dec 07 2017). Compression of Neural Networks (NN) has become a highly studied topic in recent years. The main reason for this is the demand for industrial scale usage of NNs such as deploying them on mobile devices, storing them efficiently, transmitting them via band-limited ...
Attention, Learn to Solve Routing Problems! (Mar 22 2018; revised Feb 07 2019). The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development. However, to push this idea towards practical implementation, we need better models and better ways of training. We ...
Batch-Shaped Channel Gated Networks (Jul 15 2019). We present a method for gating deep-learning architectures on a fine-grained level. Individual convolutional maps are turned on/off conditionally on features in the network. This method allows us to train neural networks with a large capacity, but lower ...
Scalable MCMC for Mixed Membership Stochastic Blockmodels (Oct 16 2015; revised Oct 22 2015). We propose a stochastic gradient Markov chain Monte Carlo (SG-MCMC) algorithm for scalable inference in mixed-membership stochastic blockmodels (MMSB). Our algorithm is based on the stochastic gradient Riemannian Langevin sampler and achieves both faster ...
Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring (Jun 27 2012). In this paper we address the following question: Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate? An algorithm based on the Langevin equation ...
Improving Variational Inference with Inverse Autoregressive Flow (Jun 15 2016). We propose a simple and scalable method for improving the flexibility of variational inference through a transformation with autoregressive networks. Autoregressive networks, such as RNNs and MADE, are very powerful models; however, ancestral sampling ...
Semisupervised Classifier Evaluation and Recalibration (Oct 08 2012). How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is ...
Variational Dropout and the Local Reparameterization Trick (Jun 08 2015; revised Dec 20 2015). We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization ...
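The local reparameterization trick moves the sampling from the weights to the pre-activations: for a factorized Gaussian weight posterior, the pre-activations are themselves Gaussian with moments computable from the layer inputs, so each example in the minibatch effectively sees its own weight sample. A minimal PyTorch sketch for a fully connected layer (variable names are illustrative):

```python
import torch

def local_reparam_linear(x, w_mu, w_logvar):
    """Sample layer outputs b ~ N(x @ w_mu, x^2 @ exp(w_logvar)) instead of sampling weights."""
    mean = x @ w_mu
    var = (x ** 2) @ torch.exp(w_logvar)
    return mean + torch.sqrt(var + 1e-8) * torch.randn_like(mean)

x = torch.randn(32, 100)                # minibatch of inputs
w_mu = torch.randn(100, 10) * 0.1       # posterior means of the weights
w_logvar = torch.full((100, 10), -6.0)  # posterior log-variances of the weights
b = local_reparam_linear(x, w_mu, w_logvar)
print(b.shape)                          # torch.Size([32, 10])
```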
Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement (Mar 14 2019; revised May 29 2019). The well-known Gumbel-Max trick for sampling from a categorical distribution can be extended to sample $k$ elements without replacement. We show how to implicitly apply this 'Gumbel-Top-$k$' trick on a factorized distribution over sequences, allowing ...
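The Gumbel-Top-k trick itself is a one-liner: perturb each log-probability with independent Gumbel noise and keep the indices of the k largest values, which yields k categories sampled without replacement. A minimal NumPy sketch (the sequence-level beam search machinery of the paper is not shown):

```python
import numpy as np

def gumbel_top_k(log_probs, k, rng):
    """Sample k distinct categories without replacement from a categorical distribution."""
    gumbel_noise = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    perturbed = log_probs + gumbel_noise
    return np.argsort(-perturbed)[:k]       # indices of the k largest perturbed values

probs = np.array([0.5, 0.2, 0.2, 0.05, 0.05])
print(gumbel_top_k(np.log(probs), k=3, rng=np.random.default_rng()))  # e.g. [0 2 1]
```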
Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget (Apr 19 2013; revised Feb 14 2014). Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets? We argue that computing the likelihood for N datapoints in the Metropolis-Hastings (MH) test to reach a single binary decision is computationally inefficient. ...
Hamiltonian ABC (Mar 06 2015). Approximate Bayesian computation (ABC) is a powerful and elegant framework for performing inference in simulation-based models. However, due to the difficulty in scaling likelihood estimates, ABC remains useful for relatively low-dimensional problems. ...
Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement (Mar 14 2019). The well-known Gumbel-Max trick for sampling from a categorical distribution can be extended to sample $k$ elements without replacement. We show how to implicitly apply this 'Gumbel-Top-$k$' trick on a factorized distribution over sequences, allowing ...
Bayesian Compression for Deep Learning (May 24 2017; revised Nov 06 2017). Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity ...
Temporally Efficient Deep Learning with Spikes (Jun 13 2017). The vast majority of natural sensory data is temporally redundant. Video frames or audio samples which are sampled at nearby points in time tend to have similar values. Typically, deep learning algorithms take no advantage of this redundancy to reduce ...
Attention-based Deep Multiple Instance Learning (Feb 13 2018; revised Jun 28 2018). Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability ...
Deep Learning with Permutation-invariant Operator for Multi-instance Histopathology Classification (Dec 01 2017; revised Dec 05 2017). The computer-aided analysis of medical scans is a longstanding goal in the medical imaging field. Currently, deep learning has become a dominant methodology for supporting pathologists and radiologists. Deep learning algorithms have been successfully applied ...
Combining Generative and Discriminative Models for Hybrid Inference (Jun 06 2019). A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model. However, in many cases the generating process is only a poor approximation ...
Learning Sparse Neural Networks through $L_0$ Regularization (Dec 04 2017; revised Jun 22 2018). We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, ...
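The usual practical device behind this kind of L0 regularization is a stochastic gate per weight (or per unit) drawn from a hard concrete distribution, which places probability mass at exactly 0 and 1 while remaining differentiable in its parameters. A minimal PyTorch sketch of sampling such a gate (the constants follow commonly quoted defaults for the hard concrete distribution and are stated here as assumptions):

```python
import torch

def hard_concrete_gate(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """Sample a gate z in [0, 1] with point masses at exactly 0 and 1."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s_stretched = s * (zeta - gamma) + gamma        # stretch to the interval (gamma, zeta)
    return torch.clamp(s_stretched, 0.0, 1.0)       # hard clip, creating mass at 0 and 1

log_alpha = torch.zeros(10, requires_grad=True)     # one gate parameter per weight
z = hard_concrete_gate(log_alpha)
print(z)                                            # some entries are exactly 0.0 or 1.0
```

Multiplying the weights by these gates and penalizing the probability that each gate is non-zero yields a differentiable surrogate for the L0 penalty.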
Combining Generative and Discriminative Models for Hybrid Inference (Jun 06 2019; revised Jun 20 2019). A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model. However, in many cases the generating process is only a poor approximation ...
Structured Region Graphs: Morphing EP into GBP (Jul 04 2012). GBP and EP are two successful algorithms for approximate probabilistic inference, which are based on different approximation strategies. An open problem in both algorithms has been how to choose an appropriate approximation structure. We introduce 'structured ...
The Variational Fair Autoencoder (Nov 03 2015; revised Aug 10 2017). We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding ...
Emerging Convolutions for Generative Normalizing Flows (Jan 30 2019; revised Feb 20 2019). Generative flows are attractive because they admit exact likelihood optimization and efficient image synthesis. Recently, Kingma & Dhariwal (2018) demonstrated with Glow that generative flows are capable of generating high quality images. We generalize ...
Bayesian Dark Knowledge (Jun 14 2015; revised Nov 06 2015). We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities, e.g., for applications involving bandits ...
Data-Free Quantization through Weight Equalization and Bias Correction (Jun 11 2019). We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization ...
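The weight equalization mentioned here exploits the positive scaling equivariance of ReLU: scaling an output channel of one layer down and the matching input channel of the next layer up leaves the network function unchanged, but lets both layers use their quantization range more evenly. A minimal NumPy sketch for a pair of fully connected layers (a simplification of the paper's per-channel procedure; variable names are illustrative):

```python
import numpy as np

def equalize_pair(W1, b1, W2):
    """Rescale channel i of layer 1 by 1/s_i and compensate in layer 2, with
    s_i = sqrt(r1_i / r2_i), so both layers end up with per-channel range sqrt(r1_i * r2_i)."""
    r1 = np.abs(W1).max(axis=1)            # per-output-channel weight range of layer 1
    r2 = np.abs(W2).max(axis=0)            # per-input-channel weight range of layer 2
    s = np.sqrt(r1 / r2)
    return W1 / s[:, None], b1 / s, W2 * s[None, :]

W1 = np.random.randn(16, 8) * np.random.uniform(0.1, 10, size=(16, 1))  # badly scaled channels
b1 = np.random.randn(16)
W2 = np.random.randn(4, 16)
W1_eq, b1_eq, W2_eq = equalize_pair(W1, b1, W2)

# the composed ReLU network computes the same function before and after equalization
x = np.random.randn(8)
h = np.maximum(W1 @ x + b1, 0.0)
h_eq = np.maximum(W1_eq @ x + b1_eq, 0.0)
print(np.allclose(W2 @ h, W2_eq @ h_eq))   # True
```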
Variational Bayes In Private Settings (VIPS) (Nov 01 2016; revised Dec 03 2018). Many applications of Bayesian data analysis involve sensitive information, motivating methods which ensure that privacy is protected. We introduce a general privacy-preserving framework for Variational Bayes (VB), a widely used optimization-based Bayesian ...
3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data (Jul 06 2018; revised Oct 27 2018). We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant ...
Efficient Parametric Projection Pursuit Density Estimation (Oct 19 2012). Product models of low dimensional experts are a powerful way to avoid the curse of dimensionality. We present the "under-complete product of experts" (UPoE), where each expert models a one dimensional projection of the data. The UPoE is fully tractable ...
The Variational Fair Autoencoder (Nov 03 2015; revised Feb 04 2016). We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding ...
Private Topic Modeling (Sep 14 2016). We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA). The iterative nature of stochastic variational inference presents challenges: multiple iterations are required to obtain accurate posterior distributions, ...
Variational Bayes In Private Settings (VIPS) (Nov 01 2016; revised Nov 28 2016). We provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the ...
Semi-Supervised Learning with Deep Generative Models (Jun 20 2014; revised Oct 31 2014). The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised ...
Predictive Uncertainty through Quantization (Oct 12 2018). High-risk domains require reliable confidence estimates from predictive models. Deep latent variable models provide these, but suffer from the rigid variational distributions used for tractable inference, which err on the side of overconfidence. We propose ...
DP-EM: Differentially Private Expectation Maximization (May 23 2016; revised Oct 31 2016). The iterative nature of the expectation maximization (EM) algorithm presents a challenge for privacy-preserving estimation, as each iteration increases the amount of noise needed. We propose a practical private EM algorithm that overcomes this challenge ...
Large-Scale Distributed Bayesian Matrix Factorization using Stochastic Gradient MCMC (Mar 05 2015; revised Mar 10 2015). Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this ...
A New Method to Visualize Deep Neural Networks (Mar 08 2016; revised Jun 09 2016). We present a method for visualising the response of a deep neural network to a specific input. For image data, for instance, our method will highlight areas that provide evidence in favor of, and against choosing a certain class. The method overcomes several ...
Variational Bayes In Private Settings (VIPS) (Nov 01 2016). We provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the ...
The Deep Weight Prior (Oct 16 2018; revised Nov 27 2018). Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution. In this work, we propose a new type of prior distribution for ...
Emerging Convolutions for Generative Normalizing Flows (Jan 30 2019). Generative flows are attractive because they admit exact likelihood optimization and efficient image synthesis. Recently, Kingma & Dhariwal (2018) demonstrated with Glow that generative flows are capable of generating high quality images. We generalize ...
Gauge Equivariant Convolutional Networks and the Icosahedral CNN (Feb 11 2019; revised Apr 22 2019). The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems ...
Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation (May 10 2013). In the internet era there has been an explosion in the amount of digital text information available, leading to difficulties of scale for traditional inference algorithms for topic models. Recent advances in stochastic variational inference algorithms ...
On Smoothing and Inference for Topic Models (May 09 2012). Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data. Various learning algorithms have been developed in recent years, including collapsed Gibbs sampling, variational inference, ...
Graph Convolutional Matrix Completion (Jun 07 2017; revised Oct 25 2017). We consider matrix completion for recommender systems from the point of view of link prediction on graphs. Interaction data such as movie ratings can be represented by a bipartite user-item graph with labeled edges denoting observed ratings. Building ...
Emerging Convolutions for Generative Normalizing Flows (Jan 30 2019; revised May 20 2019). Generative flows are attractive because they admit exact likelihood optimization and efficient image synthesis. Recently, Kingma & Dhariwal (2018) demonstrated with Glow that generative flows are capable of generating high quality images. We generalize ...
Gauge Equivariant Convolutional Networks and the Icosahedral CNN (Feb 11 2019; revised May 13 2019). The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems ...
DIVA: Domain Invariant Variational Autoencoders (May 24 2019). We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative ...
The Functional Neural Process (Jun 19 2019). We present a new family of exchangeable stochastic processes, the Functional Neural Processes (FNPs). FNPs model distributions over functions by learning a graph of dependencies on top of latent representations of the points in the given dataset. In doing ...
Variational Bayes In Private Settings (VIPS) (Nov 01 2016; revised Dec 21 2016). We provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the ...
Gauge Equivariant Convolutional Networks and the Icosahedral CNN (Feb 11 2019). The idea of equivariance to symmetry transformations provides one of the first theoretically grounded principles for neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging ...
The Deep Weight Prior (Oct 16 2018; revised Feb 18 2019). Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution. In this work, we propose a new type of prior distribution for ...