Results for "Hananel Hazan"

87 results (0.10 s)
Locally Connected Spiking Neural Networks for Unsupervised Feature Learning (Apr 12 2019). In recent years, Spiking Neural Networks (SNNs) have demonstrated great successes in completing various Machine Learning tasks. We introduce a method for learning image features by locally connected layers in SNNs using spike-timing-dependent ...

A maximum entropy network reconstruction of macroeconomic models (Jul 27 2018, updated Dec 07 2018). In this article the problem of reconstructing the pattern of connection between agents from partial empirical data in a macro-economic model is addressed, given a set of behavioral equations. This systemic point of view puts the focus on distributional ...

Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model (Jan 05 2016, updated Sep 02 2016). We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility ...

Approximate Convex Optimization by Online Game Playing (Oct 19 2006). Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, ...

BindsNET: A machine learning-oriented spiking neural networks library in Python (Jun 04 2018, updated Dec 10 2018). The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, ...

Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games (Mar 26 2019). Various implementations of Deep Reinforcement Learning (RL) have demonstrated excellent performance on tasks that can be solved by a trained policy, but they are not without drawbacks. Deep RL suffers from high sensitivity to noisy and missing input and adversarial ...

Unsupervised Learning with Self-Organizing Spiking Neural Networks (Jul 24 2018). We present a system comprising a hybridization of self-organizing map (SOM) properties with spiking neural networks (SNNs) that retain many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters ...
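The self-organized lattice of filters in the abstract above builds on the classical SOM idea. A minimal sketch of the standard Kohonen update (the textbook rule, not the paper's spiking variant; all names and parameters here are illustrative):

```python
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One Kohonen SOM update: pull the winning unit and its
    grid neighbors toward the input sample x."""
    # winner = unit whose weight vector is closest to x
    win = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Gaussian neighborhood on the 2-D grid around the winner
    d2 = np.sum((grid - grid[win]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    # each weight moves a fraction lr*h toward x
    weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(4) for j in range(4)], float)
w = rng.random((16, 2))          # 4x4 lattice of 2-D filters
data = rng.random((500, 2))      # toy inputs in the unit square
for x in data:
    w = som_step(w, grid, x)
```

Because each update is a convex combination of the old weight and the sample, the learned filters stay inside the convex hull of the data.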
The Computational Power of Optimization in Online Learning (Apr 08 2015, updated Jan 27 2016). We consider the fundamental problem of prediction with expert advice where the experts are "optimizable": there is a black-box optimization oracle that can be used to compute, in constant time, the leading expert in retrospect at any point in time. In ...

Optimal Algorithms for Ridge and Lasso Regression with Partially Observed Attributes (Aug 23 2011, updated Nov 27 2012). We consider the most common variants of linear regression, including Ridge, Lasso and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple ...

Convergent Message-Passing Algorithms for Inference over General Graphs with Convex Free Energies (Jun 13 2012). Inference problems in graphical models can be represented as a constrained optimization of a free energy function. It is known that when the Bethe free energy is used, the fixed points of the belief propagation (BP) algorithm correspond to the local minima ...

Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets (Jun 05 2014, updated Aug 14 2015). The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large scale optimization and machine learning. A key advantage of the method is that it avoids projections ...

On a q-Identity Arising from the Dimension of a Representation of GL(n) over a Finite Field (Nov 20 2016). The present paper proves a $q$-identity, which arises from a representation $\pi_{N,\psi}$ of $\text{GL}_n(\mathbb{F}_q)$. This identity gives a significant simplification for the dimension of $\pi_{N,\psi}$, which allowed the second author to obtain ...

Projection-free Online Learning (Jun 18 2012). The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe ...
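The projection-free idea in the two Frank-Wolfe abstracts above can be sketched offline: instead of projecting, call a linear minimization oracle over the feasible set and take a convex combination. This is the textbook method on the probability simplex, with an illustrative quadratic objective and the standard step schedule, not code from either paper:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=5000):
    """Frank-Wolfe: every iterate is a convex combination of
    simplex vertices, so no projection step is ever needed."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        # linear minimization oracle over the simplex: the
        # minimizing vertex is the coordinate with smallest gradient
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

b = np.array([0.2, 0.3, 0.5])            # optimum lies inside the simplex
# minimize ||x - b||^2 over the simplex; gradient is 2(x - b)
x = frank_wolfe_simplex(lambda x: 2 * (x - b), np.array([1.0, 0.0, 0.0]))
```

The linear oracle here costs one `argmin` per iteration, versus a full sort-based projection for Euclidean projection onto the simplex.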
An optimal algorithm for stochastic strongly-convex optimization (Jun 12 2010). We consider stochastic convex optimization with a strongly convex (but not necessarily smooth) objective. We give an algorithm which performs only gradient updates with an optimal rate of convergence.

Linear Regression with Limited Observation (Jun 18 2012). We consider the most common variants of linear regression, including Ridge, Lasso and Support-vector regression, in a setting where the learner is allowed to observe only a fixed number of attributes of each example at training time. We present simple ...

(weak) Calibration is Computationally Hard (Feb 20 2012). We show that the existence of a computationally efficient calibration algorithm, with a low weak calibration rate, would imply the existence of an efficient algorithm for computing approximate Nash equilibria - thus implying the unlikely conclusion that ...

Approximated Structured Prediction for Learning Large Scale Graphical Models (Jun 15 2010, updated Jul 09 2012). This manuscript contains the proofs for "A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction".

Lower Bounds for Higher-Order Convex Optimization (Oct 27 2017). State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial ...

An optimal algorithm for bandit convex optimization (Mar 14 2016, updated Mar 15 2016). We consider the problem of online convex optimization against an arbitrary adversary with bandit feedback, known as bandit convex optimization. We give the first $\tilde{O}(\sqrt{T})$-regret algorithm for this setting based on a novel application of the ...

A Linear-Time Algorithm for Trust Region Problems (Jan 27 2014). We consider the fundamental problem of maximizing a general quadratic function over an ellipsoidal domain, also known as the trust region problem. We give the first provable linear-time (in the number of non-zero entries of the input) algorithm for approximately ...

A Non-generative Framework and Convex Relaxations for Unsupervised Learning (Oct 04 2016, updated Oct 05 2016). We give a novel formal theoretical framework for unsupervised learning with two distinctive characteristics. First, it does not assume any generative model and is based on a worst-case performance metric. Second, it is comparative, namely performance is ...

Fast and Simple PCA via Convex Optimization (Sep 18 2015, updated Nov 25 2015). The problem of principal component analysis (PCA) is traditionally solved by spectral or algebraic methods. We show how computing the leading principal component can be reduced to solving a small number of well-conditioned convex optimization ...

Steps Toward Deep Kernel Methods from Infinite Neural Networks (Aug 20 2015, updated Sep 02 2015). Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context ...

Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference (Mar 18 2009, updated Jun 28 2010). In this paper we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. We generalize the Belief Propagation ...

On the Partition Function and Random Maximum A-Posteriori Perturbations (Jun 27 2012). In this paper we relate the partition function to the max-statistics of random variables. In particular, we provide a novel framework for approximating and bounding the partition function using MAP inference on randomly perturbed models. As a result, ...

Almost Optimal Sublinear Time Algorithm for Semidefinite Programming (Aug 26 2012). We present an algorithm for approximating semidefinite programs with running time that is sublinear in the number of entries in the semidefinite instance. We also present lower bounds that show our algorithm to have a nearly optimal running time.

Universal MMSE Filtering With Logarithmic Adaptive Regret (Nov 04 2011, updated Nov 14 2011). We consider the problem of online estimation of a real-valued signal corrupted by oblivious zero-mean noise using linear estimators. The estimator is required to iteratively predict the underlying signal based on the current and last several noisy observations, ...
On Certain Degenerate Whittaker Models for Cuspidal Representations of $\mathrm{GL}_{k\cdot n}\left(\mathbb{F}_q\right)$ (Jul 23 2017, updated Nov 16 2017). Let $\pi$ be an irreducible cuspidal representation of $\mathrm{GL}_{kn}\left(\mathbb{F}_q\right)$. Assume that $\pi = \pi_{\theta}$ corresponds to a regular character $\theta$ of $\mathbb{F}_{q^{kn}}^{*}$. We consider the twisted Jacquet module of $\pi$ ...

Phase modulated pulse interferometry for simultaneous multi-channel ultrasound detection (Jan 16 2019). In optical detection of ultrasound, resonators with high Q-factors are often used to maximize sensitivity. However, in order to perform parallel interrogation, conventional interferometric techniques require an overlap between the resonator spectra, which ...

A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization (Jan 20 2013, updated Aug 14 2015). Linear optimization is often algorithmically simpler than non-linear convex optimization. Linear optimization over matroid polytopes, matching polytopes and path polytopes are examples of problems for which we have simple and efficient combinatorial ...

Variance-Reduced and Projection-Free Stochastic Optimization (Feb 05 2016). The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints. However, in the stochastic learning setting, it is still relatively ...

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier (Jul 09 2015, updated Nov 05 2015). This paper explores a surprising equivalence between two seemingly-distinct convex optimization methods. We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior ...

A Schelling model with switching agents: decreasing segregation via random allocation and social mobility (Dec 20 2012, updated Sep 26 2013). We study the behaviour of a Schelling-class system in which a fraction $f$ of spatially-fixed switching agents is introduced. This new model allows for multiple interpretations, including: (i) random, non-preferential allocation (e.g. by housing ...

Determination of Handedness in a Single Chiral Nanocrystal via Circularly Polarized Luminescence (Aug 16 2018). The occurrence of biological homochirality is attributed to symmetry breaking mechanisms which are still debatable. Studies of symmetry breaking require tools for monitoring the population ratios of individual chiral nano-objects, such as molecules, ...

Variance Reduction for Faster Non-Convex Optimization (Mar 17 2016, updated Aug 25 2016). We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization ...

Optimal Black-Box Reductions Between Optimization Objectives (Mar 17 2016, updated May 20 2016). The diverse world of machine learning applications has given rise to a plethora of algorithms and optimization methods, finely tuned to the specific regression or classification task at hand. We reduce the complexity of algorithm design for machine learning ...

On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations (Sep 29 2013). In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing low dimensional perturbations ...

High-Order Attention Models for Visual Question Answering (Nov 12 2017). The quest for algorithms that enable cognitive abilities is an important part of machine learning. A common trait in many recently investigated cognitive-like tasks is that they take into account different data modalities, such as visual and textual input. ...

Spontaneous and Directed Symmetry Breaking in the Formation of Chiral Nanocrystals (Aug 16 2018, updated Aug 24 2018). The homochirality of biomolecules remains one of the outstanding puzzles concerning the beginning of life. Chiral amplification of a randomly perturbed racemic mixture of chiral molecules is a well-accepted prerequisite for all routes to biological homochirality. ...

Classification with Low Rank and Missing Data (Jan 14 2015). We consider classification and regression tasks where we have missing data and assume that the (clean) data resides in a low rank subspace. Finding a hidden subspace is known to be computationally hard. Nevertheless, using a non-proper formulation we ...

Second Order Stochastic Optimization in Linear Time (Feb 12 2016, updated Oct 14 2016). First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high ...

Exponentiated Gradient Meets Gradient Descent (Feb 05 2019). The (stochastic) gradient descent and the multiplicative update method are probably the most popular algorithms in machine learning. We introduce and study a new regularization which provides a unification of the additive and multiplicative updates. This ...
Learning in Non-convex Games with an Optimization Oracle (Oct 17 2018, updated Feb 01 2019). We consider online learning in an adversarial, non-convex setting under the assumption that the learner has access to an offline optimization oracle. In the general setting of prediction with expert advice, Hazan et al. (2016) established that in the ...

Second-Order Stochastic Optimization for Machine Learning in Linear Time (Feb 12 2016, updated Nov 30 2017). First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high ...

Learning Linear Dynamical Systems via Spectral Filtering (Nov 02 2017). We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: carefully overparameterize ...

Tight Bounds for Bandit Combinatorial Optimization (Feb 24 2017). We revisit the study of optimal regret rates in bandit combinatorial optimization, a fundamental framework for sequential decision making under uncertainty that abstracts numerous combinatorial prediction problems. We prove that the attainable regret ...

Online Learning with Feedback Graphs Without the Graphs (May 23 2016). We study an online learning framework introduced by Mannor and Shamir (2011) in which the feedback is specified by a graph, in a setting where the graph may vary from round to round and is never fully revealed to the learner. We show a large gap ...

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization (Feb 19 2018, updated Jun 11 2018). Conventional wisdom in deep learning states that increasing depth improves expressiveness but complicates optimization. This paper suggests that, sometimes, increasing depth can speed up optimization. The effect of depth on optimization is decoupled from ...

Volumetric Spanners: an Efficient Exploration Basis for Learning (Dec 21 2013, updated May 25 2014). Numerous machine learning problems require an exploration basis - a mechanism to explore the action space. We define a novel geometric notion of exploration basis with low variance, called volumetric spanners, and give efficient algorithms to construct ...

Blackwell Approachability and Low-Regret Learning are Equivalent (Nov 08 2010). We consider the celebrated Blackwell Approachability Theorem for two-player games with vector payoffs. We show that Blackwell's result is equivalent, via efficient reductions, to the existence of "no-regret" algorithms for Online Linear Optimization. ...

A Simple Baseline for Audio-Visual Scene-Aware Dialog (Apr 11 2019). The recently proposed audio-visual scene-aware dialog task paves the way to a more data-driven way of learning virtual assistants, smart speakers and car navigation systems. However, very little is known to date about how to effectively extract meaningful ...

Logistic Regression: Tight Bounds for Stochastic and Online Optimization (May 15 2014). The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this paper we investigate the question of whether these smoothness and convexity properties make the logistic ...

Online Convex Optimization Against Adversaries with Memory and Application to Statistical Arbitrage (Feb 27 2013, updated Jun 10 2014). The framework of online learning with memory naturally captures learning problems with temporal constraints, and was previously studied for the experts setting. In this work we extend the notion of learning with memory to the general Online Convex Optimization ...

Near-Optimal Algorithms for Online Matrix Prediction (Mar 31 2012). In several online prediction problems of recent interest the comparison class is composed of matrices with bounded entries. For example, in the online max-cut problem, the comparison class consists of matrices which represent cuts of a given graph and in online ...

Tightening Fractional Covering Upper Bounds on the Partition Function for High-Order Region Graphs (Oct 16 2012). In this paper we present a new approach for tightening upper bounds on the partition function. Our upper bounds are based on fractional covering bounds on the entropy function, and result in a concave program to compute these bounds and a convex program ...

Efficient Regret Minimization in Non-Convex Games (Jul 31 2017). We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes ...

Sublinear Optimization for Machine Learning (Oct 21 2010). We give sublinear-time approximation algorithms for some optimization problems arising in machine learning, such as training linear classifiers and finding minimum enclosing balls. Our algorithms can be extended to some kernelized versions of these problems, ...
Efficient Structured Prediction with Latent Variables for General Graphical Models (Jun 27 2012). In this paper we propose a unified framework for structured prediction with latent variables which includes hidden conditional random fields and latent structured support vector machines as special cases. We describe a local entropy approximation for ...

On Measure Concentration of Random Maximum A-Posteriori Perturbations (Oct 15 2013). The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from ...

Extreme Tensoring for Low-Memory Preconditioning (Feb 12 2019). State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption. This has created a recent demand for memory-efficient optimizers. To this end, we investigate the limits and performance tradeoffs ...

Spectral Filtering for General Linear Dynamical Systems (Feb 12 2018). We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix. The algorithm extends the recently introduced technique ...

Blending Learning and Inference in Structured Prediction (Oct 08 2012, updated Aug 30 2013). In this paper we derive an efficient algorithm to learn the parameters of structured predictors in general graphical models. This algorithm blends the learning and inference tasks, which results in a significant speedup over traditional approaches, such ...

Continuous Markov Random Fields for Robust Stereo Estimation (Apr 06 2012). In this paper we present a novel slanted-plane MRF model which reasons jointly about occlusion boundaries as well as depth. We formulate the problem as the one of inference in a hybrid MRF composed of both continuous (i.e., slanted 3D planes) and discrete ...

Multidimensional Urban Segregation - Toward A Neural Network Measure (May 09 2017, updated Jun 05 2018). We introduce a multidimensional, neural-network approach to reveal and measure urban segregation phenomena, based on the Self-Organizing Map algorithm (SOM). The multidimensionality of SOM allows one to apprehend a large number of variables simultaneously, ...

Oracle-Based Robust Optimization via Online Learning (Feb 25 2014). Robust optimization is a common framework in optimization under uncertainty when the problem parameters are not known but are known to belong to some given uncertainty set. In the robust optimization framework the problem solved ...

Online Learning for Time Series Prediction (Feb 27 2013). In this paper we address the problem of predicting a time series using the ARMA (autoregressive moving average) model, under minimal assumptions on the noise terms. Using regret minimization techniques, we develop effective online learning algorithms ...

Mammography Dual View Mass Correspondence (Jul 02 2018). Standard breast cancer screening involves the acquisition of two mammography X-ray projections for each breast. Typically, a comparison of both views supports the challenging task of tumor detection and localization. We introduce a deep learning, patch-based ...

Online Learning of Quantum States (Feb 25 2018, updated Oct 01 2018). Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_{1}$, then other copies using a measurement $E_{2}$, and so on. At each stage $t$, we generate a current hypothesis ...

Factor Graph Attention (Apr 11 2019). Dialog is an effective way to exchange information, but subtle details and nuances are extremely important. While significant progress has paved a path to address visual dialog with algorithms, details and nuances remain a challenge. Attention mechanisms ...

Beyond Convexity: Stochastic Quasi-Convex Optimization (Jul 08 2015, updated Oct 28 2015). Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) ...
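The Normalized Gradient Descent mentioned in the abstract above replaces the raw gradient step with a step in the gradient's direction only. A minimal generic sketch on a one-dimensional quasi-convex toy objective (an illustrative example, not the paper's algorithm or analysis):

```python
import numpy as np

def ngd(f, grad, x0, lr=0.01, steps=2000):
    """Normalized Gradient Descent: step along g/||g||, which is
    insensitive to the gradient's magnitude (useful when quasi-convex
    objectives have vanishing or exploding gradients)."""
    x = x0
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        g = grad(x)
        n = np.linalg.norm(g)
        if n == 0:
            break
        x = x - lr * g / n
        # NGD oscillates near the optimum, so track the best iterate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

# sqrt(|x - 3|) is quasi-convex but not convex, and its gradient
# blows up at the minimizer -- a bad case for plain SGD steps
f = lambda x: np.sqrt(abs(x - 3.0))
g = lambda x: np.sign(x - 3.0) * 0.5 / max(np.sqrt(abs(x - 3.0)), 1e-12)
x = ngd(f, g, 0.0)
```

Normalizing makes every step have length `lr`, so the final accuracy is governed by the step size rather than by the gradient scale.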
Online Learning with Low Rank Experts (Mar 21 2016, updated May 23 2016). We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace. We devise algorithms with regret bounds that are independent of the number ...

Online Gradient Boosting (Jun 16 2015, updated Oct 30 2015). We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm with linear loss functions that ...

Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder (Jun 07 2018, updated Feb 09 2019). Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation ...

On Graduated Optimization for Stochastic Non-Convex Problems (Mar 12 2015, updated Jul 08 2015). The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known in terms of theoretical ...

Enhanced sensitivity of silicon-photonics-based ultrasound detection via BCB coating (Jan 13 2019). Ultrasound detection via silicon waveguides relies on the ability of acoustic waves to modulate the effective refractive index of the guided modes. However, the low photo-elastic response of silicon and silica limits the sensitivity of conventional silicon-on-insulator ...

Faster Eigenvector Computation via Shift-and-Invert Preconditioning (May 26 2016). We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$ -- i.e. computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$: Offline Eigenvector Estimation: Given an ...

Finding Approximate Local Minima Faster than Gradient Descent (Nov 03 2016, updated Apr 24 2017). We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to ...

High Dimensional Inference with Random Maximum A-Posteriori Perturbations (Feb 10 2016, updated Nov 15 2016). This paper presents a new approach, called perturb-max, for high-dimensional statistical inference that is based on applying random perturbations followed by optimization. This framework injects randomness into maximum a-posteriori (MAP) predictors by randomly ...

Multicuts and Perturb & MAP for Probabilistic Graph Clustering (Jan 09 2016). We present a probabilistic graphical model formulation for the graph clustering problem. This makes it possible to locally represent uncertainty of image partitions by approximate marginal distributions in a mathematically substantiated way, and to rectify local ...

Online Control with Adversarial Disturbances (Feb 23 2019). We study the control of a linear dynamical system with adversarial disturbances (as opposed to statistical noise). The objective we consider is one of regret: we desire an online control procedure that can do nearly as well as that of a procedure that ...

Finding Approximate Local Minima for Nonconvex Optimization in Linear Time (Nov 03 2016, updated Nov 04 2016). We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which is linear in the input representation. The time complexity of our algorithm to find an approximate local minimum is even ...

High Dimensional Inference with Random Maximum A-Posteriori Perturbations (Feb 10 2016). In this work we present a new approach for high-dimensional statistical inference that is based on optimization and random perturbations. This framework injects randomness into maximum a-posteriori (MAP) predictors by randomly perturbing its potential function. ...

The Case for Full-Matrix Adaptive Regularization (Jun 08 2018). Adaptive regularization methods come in diagonal and full-matrix variants. However, only the former have enjoyed widespread adoption in training large-scale deep models. This is due to the computational overhead of manipulating a full matrix in high dimension. ...

Finding Local Minima for Nonconvex Optimization in Linear Time (Nov 03 2016). We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which is linear in the input representation. The previously fastest methods run in time proportional to matrix inversion or worse. ...

Software Design Document, Testing, Deployment and Configuration Management of the IUfA's UUIS -- a Team 3 COMP5541-W10 Project Approach (May 05 2010). The purpose of this document is to provide technical specifications for the Design of the University Unified Inventory System - Web Portal, of the UIfA. The Team of Developers used a Feedback Waterfall approach to build up the system, under an ...

Software Requirements Specification of the IUfA's UUIS -- a Team 3 COMP5541-W10 Project Approach (May 04 2010). The purpose of this document is to specify the requirements of the University Unified Inventory System, of the UIfA. The Team of Analysts used a Feedback Waterfall approach to collect the requirements. UML diagrams, such as Use case diagrams, Block Diagrams, ...

Role of sonication pre-treatment and cation valence in nano-cellulose suspensions sol-gel transition (May 19 2017). The sol-gel transition of carboxylated cellulose nanocrystals is investigated using rheology, SAXS, NMR and optical spectroscopies to unveil the distinctive roles of ultrasound treatment and ion addition. Besides cellulose fiber fragmentation, sonication ...