Latest in stat.ML

Reweighted Expectation Maximization (Jun 13, 2019). Training deep generative models with maximum likelihood remains a challenge. The typical workaround is to use variational inference (VI) and maximize a lower bound to the log marginal likelihood of the data. Variational auto-encoders (VAEs) adopt this ...
Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models (Jun 13, 2019). We identify a new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process. Inference in this setting has either employed computationally intensive MCMC methods, or relied on factorisations of the ...
Kernel and Deep Regimes in Overparametrized Models (Jun 13, 2019). A recent line of work studies overparametrized neural networks in the "kernel regime", i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS ...
Telephonetic: Making Neural Language Models Robust to ASR and Semantic Noise (Jun 13, 2019). Speech processing systems rely on robust feature extraction to handle phonetic and semantic variations found in natural language. While techniques exist for desensitizing features to common noise patterns produced by Speech-to-Text (STT) and Text-to-Speech ...
Robust Regression for Safe Exploration in Control (Jun 13, 2019). We study the problem of safe learning and exploration in sequential control problems. The goal is to safely collect data samples from an operating environment to learn an optimal controller. A central challenge in this setting is how to quantify uncertainty ...
Lower Bounds for Adversarially Robust PAC Learning (Jun 13, 2019). In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to \emph{misclassify} the adversarially perturbed sample point $\widetilde{x}$, i.e., $h(\widetilde{x})\neq ...
Deep Reinforcement Learning for Cyber Security (Jun 13, 2019). The scale of Internet-connected systems has increased considerably, and these systems are being exposed to cyber attacks more than ever. The complexity and dynamics of cyber attacks require protecting mechanisms to be responsive, adaptive, and large-scale. ...
Modeling the Dynamics of PDE Systems with Physics-Constrained Deep Auto-Regressive Networks (Jun 13, 2019). In recent years, deep learning has proven to be a viable methodology for surrogate modeling and uncertainty quantification for a vast number of physical systems. However, in their traditional form, such models require a large amount of training data. ...
Nonlinear System Identification via Tensor Completion (Jun 13, 2019). Function approximation from input and output data pairs constitutes a fundamental problem in supervised learning. Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a generic nonlinear system, ...
Distributed High-dimensional Regression Under a Quantile Loss Function (Jun 13, 2019). This paper studies distributed estimation and support recovery for the high-dimensional linear regression model with heavy-tailed noise. To deal with heavy-tailed noise whose variance can be infinite, we adopt the quantile regression loss function instead ...
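The quantile regression loss referred to here is the standard pinball loss; a minimal numpy sketch for intuition (function name and the tau value are illustrative, not taken from the paper):

    import numpy as np

    def pinball_loss(residual, tau):
        """Quantile (pinball) loss for residuals y - x @ beta at quantile level tau."""
        return np.mean(np.maximum(tau * residual, (tau - 1.0) * residual))

    # Median regression (tau = 0.5) reduces to half the mean absolute error.
    r = np.array([1.0, -2.0, 0.5])
    print(pinball_loss(r, tau=0.5))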
Iterative subtraction method for Feature Ranking (Jun 13, 2019). Training features used to analyse physical processes are often highly correlated, and determining which ones are most important for the classification is a non-trivial task. For the use case of a search for a top-quark pair produced in association with ...
Training Neural Networks for and by Interpolation (Jun 13, 2019). The majority of modern deep learning models are able to interpolate the data: the empirical loss can be driven near zero on all samples simultaneously. In this work, we explicitly exploit this interpolation property for the design of a new optimization ...
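A classical way to exploit the interpolation property (an attainable near-zero loss) inside an optimizer is the Polyak step size, shown here only as background and not as the specific method proposed in the paper:

    w_{t+1} = w_t - \eta_t \nabla f(w_t), \qquad \eta_t = \frac{f(w_t) - f^{\ast}}{\lVert \nabla f(w_t) \rVert^{2}}, \qquad f^{\ast} \approx 0 \ \text{under interpolation}.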
Variance Estimation For Online Regression via Spectrum Thresholding (Jun 13, 2019). We consider the online linear regression problem, where the predictor vector may vary with time. This problem can be modelled as a linear dynamical system, where the parameters that need to be learned are the variance of both the process noise and the ...
Cognitive Knowledge Graph Reasoning for One-shot Relational Learning (Jun 13, 2019). Inferring new facts from existing knowledge graphs (KG) with explainable reasoning processes is a significant problem and has received much attention recently. However, few studies have focused on relation types unseen in the original KG, given only one ...
Robust and interpretable blind image denoising via bias-free convolutional neural networks (Jun 13, 2019). Deep convolutional networks often append additive constant ("bias") terms to their convolution operations, enabling a richer repertoire of functional mappings. Biases are also used to facilitate training, by subtracting mean response over batches of training ...
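In most deep learning frameworks, making a convolutional layer bias-free is a one-flag change; a minimal PyTorch sketch (layer sizes are illustrative):

    import torch.nn as nn

    # A convolution with no additive bias term: set bias=False.
    conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1, bias=False)

    # A stack of bias-free convolutions and ReLUs is positively homogeneous:
    # scaling the input by a nonnegative constant scales the output by the same constant.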
Selective prediction-set models with coverage guarantees (Jun 13, 2019). Though black-box predictors are state-of-the-art for many complex tasks, they often fail to properly quantify predictive uncertainty and may provide inappropriate predictions for unfamiliar data. Instead, we can learn more reliable models by letting them ...
Reinforcement Learning of Spatio-Temporal Point Processes (Jun 13, 2019). Spatio-temporal event data is ubiquitous in various applications, such as social media, crime events, and electronic health records. Spatio-temporal point processes offer a versatile framework for modeling such event data, as they can jointly capture spatial ...
Random Tessellation Forests (Jun 13, 2019). Space partitioning methods such as random forests and the Mondrian process are powerful machine learning methods for multi-dimensional and relational data, and are based on recursively cutting a domain. The flexibility of these methods is often limited ...
Copulas as High-Dimensional Generative Models: Vine Copula Autoencoders (Jun 12, 2019). We propose a vine copula autoencoder to construct flexible generative models for high-dimensional distributions in a straightforward three-step procedure. First, an autoencoder compresses the data using a lower dimensional representation. Second, the ...
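The truncated abstract outlines a compress / fit-copula / sample-and-decode recipe. A rough sketch of that shape, using a Gaussian copula as a stand-in for the vine copula and assuming hypothetical encode/decode functions from a pre-trained autoencoder:

    import numpy as np
    from scipy.stats import norm, rankdata

    def fit_gaussian_copula(latents):
        """Fit a Gaussian copula to autoencoder latent codes (rows = samples)."""
        n, d = latents.shape
        # Probability-integral transform each margin via empirical ranks.
        u = np.column_stack([rankdata(latents[:, j]) / (n + 1) for j in range(d)])
        z = norm.ppf(u)
        return np.corrcoef(z, rowvar=False), np.sort(latents, axis=0)

    def sample_latents(corr, sorted_latents, n_samples, rng):
        """Sample latent codes: draw correlated normals, map to uniforms, invert empirical margins."""
        d = corr.shape[0]
        z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
        u = norm.cdf(z)
        idx = (u * (sorted_latents.shape[0] - 1)).astype(int)
        return np.column_stack([sorted_latents[idx[:, j], j] for j in range(d)])

    # encode/decode below are hypothetical autoencoder functions, not from the paper:
    # new_data = decode(sample_latents(*fit_gaussian_copula(encode(X)), 1000, np.random.default_rng(0)))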
Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation (Jun 12, 2019). In this work we aim to obtain computationally-efficient uncertainty estimates with deep networks. For this, we propose a modified knowledge distillation procedure that achieves state-of-the-art uncertainty estimates both for in and out-of-distribution ...
Flexible Modeling of Diversity with Strongly Log-Concave Distributions (Jun 12, 2019). Strongly log-concave (SLC) distributions are a rich class of discrete probability distributions over subsets of some ground set. They are strictly more general than strongly Rayleigh (SR) distributions such as the well-known determinantal point process. ...
Neural Graph Evolution: Towards Efficient Automatic Robot Design (Jun 12, 2019). Despite the recent successes in robotic locomotion control, the design of robots still relies heavily on human engineering. Automatic robot design has been a long studied subject, but recent progress has been slowed by the large combinatorial search ...
Competing Bandits in Matching Markets (Jun 12, 2019). Stable matching, a classical model for two-sided markets, has long been studied with little consideration for how each side's preferences are learned. With the advent of massive online markets powered by data-driven matching platforms, it has become necessary ...
GANPOP: Generative Adversarial Network Prediction of Optical Properties from Single Snapshot Wide-field Images (Jun 12, 2019). We present a deep learning framework for wide-field, content-aware estimation of absorption and scattering coefficients of tissues, called Generative Adversarial Network Prediction of Optical Properties (GANPOP). Spatial frequency domain imaging is used ...
Optimal low rank tensor recovery (Jun 12, 2019). We investigate the sample size requirement for exact recovery of a high order tensor of low rank from a subset of its entries. In the Tucker decomposition framework, we show that the Riemannian optimization algorithm with initial value obtained from a ...
MOPED: Efficient priors for scalable variational inference in Bayesian deep neural networks (Jun 12, 2019). Variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions for neural network weights. Specifying meaningful weight priors is a challenging problem, particularly for scaling variational ...
Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective (Jun 12, 2019). A series of recent works suggest that deep neural networks (DNNs) of fixed depth are equivalent to certain Gaussian Processes (NNGP/NTK) in the highly over-parameterized regime (width or number of channels going to infinity). Other works suggest that ...
Efficient Exploration via State Marginal Matching (Jun 12, 2019). To solve tasks with sparse rewards, reinforcement learning algorithms must be equipped with suitable exploration techniques. However, it is unclear what underlying objective is being optimized by existing exploration algorithms, or how they can be altered ...
Does Learning Require Memorization? A Short Tale about a Long Tail (Jun 12, 2019). State-of-the-art results on image recognition tasks are achieved using over-parameterized learning algorithms that (nearly) perfectly fit the training set. This phenomenon is referred to as data interpolation or, informally, as memorization of the training ...
GluonTS: Probabilistic Time Series Models in Python (Jun 12, 2019). We introduce Gluon Time Series (GluonTS, https://gluon-ts.mxnet.io), a library for deep-learning-based time series modeling. GluonTS simplifies the development of and experimentation with time series models for common tasks such as forecasting ...
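For a rough sense of how such a library is used, here is a forecasting sketch in the style of the GluonTS documentation; module paths and estimator arguments are assumptions and may differ between versions:

    import numpy as np
    from gluonts.dataset.common import ListDataset
    from gluonts.model.deepar import DeepAREstimator
    from gluonts.trainer import Trainer  # path assumed from the project README; may vary by version

    # A single hourly series wrapped in the GluonTS dataset format.
    target = np.sin(np.arange(400) / 10.0)
    train_ds = ListDataset([{"start": "2019-01-01 00:00", "target": target}], freq="H")

    # Train a DeepAR model and forecast the next 24 hours (settings are illustrative).
    estimator = DeepAREstimator(freq="H", prediction_length=24, trainer=Trainer(epochs=5))
    predictor = estimator.train(training_data=train_ds)
    forecast = next(iter(predictor.predict(train_ds)))
    print(forecast.mean)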
Representation Learning for Words and Entities (Jun 12, 2019). This thesis presents new methods for unsupervised learning of distributed representations of words and entities from text and knowledge bases. The first algorithm presented in the thesis is a multi-view algorithm for learning representations of words ...
Multitask Learning for Network Traffic Classification (Jun 12, 2019). Traffic classification has various applications in today's Internet, from resource allocation, billing and QoS purposes in ISPs to firewall and malware detection in clients. Classical machine learning algorithms and deep learning models have been widely ...
Bootstrapping Upper Confidence Bound (Jun 12, 2019). The Upper Confidence Bound (UCB) method is arguably the most celebrated one used in online decision making with partial information feedback. Existing techniques for constructing confidence bounds are typically built upon various concentration inequalities, ...
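For intuition, a bootstrap-based upper confidence bound for a single arm can be formed from an empirical quantile of resampled means; a minimal numpy sketch (not the paper's specific construction):

    import numpy as np

    def bootstrap_ucb(rewards, level=0.95, n_boot=1000, rng=None):
        """Upper confidence bound for an arm's mean from the bootstrap quantile of resampled means."""
        rng = rng or np.random.default_rng(0)
        rewards = np.asarray(rewards, dtype=float)
        means = rng.choice(rewards, size=(n_boot, rewards.size), replace=True).mean(axis=1)
        return np.quantile(means, level)

    print(bootstrap_ucb([0.1, 0.4, 0.35, 0.8, 0.2]))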
A Model to Search for Synthesizable Molecules (Jun 12, 2019). Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure. While such models allow one to generate molecules with desirable properties, they give no guarantees that the molecules ...
Is Deep Learning an RG Flow? (Jun 12, 2019). Although there has been a rapid development of practical applications, theoretical explanations of deep learning are in their infancy. A possible starting point suggests that deep learning performs a sophisticated coarse graining. Coarse graining is the ...
Warping Resilient Time Series Embeddings (Jun 12, 2019). Time series are ubiquitous in real world problems, and computing the distance between two time series is often required in several learning tasks. Computing similarity between time series by ignoring variations in speed or warping is often encountered and ...
Manifold Graph with Learned Prototypes for Semi-Supervised Image Classification (Jun 12, 2019). Recent advances in semi-supervised learning methods rely on estimating categories for unlabeled data using a model trained on the labeled data (pseudo-labeling) and using the unlabeled data for various consistency-based regularization. In this work, we ...
Task Agnostic Continual Learning via Meta Learning (Jun 12, 2019). While neural networks are powerful function approximators, they suffer from catastrophic forgetting when the data distribution is not stationary. One particular formalism that studies learning under non-stationary distributions is provided by continual ...
Parameterized Structured Pruning for Deep Neural Networks (Jun 12, 2019). As a result of the growing size of Deep Neural Networks (DNNs), the gap to hardware capabilities in terms of memory and compute increases. To effectively compress DNNs, quantization and connection pruning are usually considered. However, unconstrained ...
DCEF: Deep Collaborative Encoder Framework for Unsupervised Clustering (Jun 12, 2019). Collaborative representation is a popular feature learning approach whose encoding process is assisted by various types of information. In this paper, we propose a collaborative representation restricted Boltzmann Machine (CRRBM) for modeling binary ...
Attention-based Multi-Input Deep Learning Architecture for Biological Activity Prediction: An Application in EGFR Inhibitors (Jun 12, 2019). Machine learning and deep learning have gained popularity and achieved immense success in drug discovery in recent decades. Historically, machine learning and deep learning models were trained on either structural data or chemical properties by separated ...
Regret Minimization for Reinforcement Learning by Evaluating the Optimal Bias Function (Jun 12, 2019). We present an algorithm based on the Optimism in the Face of Uncertainty (OFU) principle which is able to efficiently learn Reinforcement Learning (RL) problems modeled by a Markov decision process (MDP) with finite state-action space. By evaluating the state-pair ...
Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification (Jun 12, 2019). We study the problem of fair binary classification using the notion of Equal Opportunity. It requires the true positive rate to be distributed equally across the sensitive groups. Within this setting we show that the fair optimal classifier is obtained by ...
Deep Smoothing of the Implied Volatility Surface (Jun 12, 2019). We present an artificial neural network (ANN) approach to value financial derivatives. Unlike in standard ANN applications, practitioners use option pricing models both to validate market prices and to infer unobserved prices. Importantly, models ...
Higher-Order Ranking and Link Prediction: From Closing Triangles to Closing Higher-Order Motifs (Jun 12, 2019). In this paper, we introduce the notion of motif closure and describe higher-order ranking and link prediction methods based on the notion of closing higher-order network motifs. The methods are fast and efficient for real-time ranking and link prediction-based ...
Decoupling Gating from Linearity (Jun 12, 2019). ReLU neural networks have been the focus of many recent theoretical works trying to explain their empirical success. Nonetheless, there is still a gap between current theoretical results and empirical observations, even in the case of shallow (one ...
Fast Task Inference with Variational Intrinsic Successor Features (Jun 12, 2019). It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies \citep{gregor2016variational, eysenbach2018diversity, warde2018unsupervised}. ...
Who Will Win It? An In-game Win Probability Model for Football (Jun 12, 2019). In-game win probability is a statistical metric that provides a sports team's likelihood of winning at any given point in a game, based on the performance of historical teams in the same situation. In-game win-probability models have been extensively ...
Real-time Attention Based Look-alike Model for Recommender System (Jun 12, 2019). Recently, deep learning models have come to play increasingly important roles in content recommender systems. However, although the performance of recommendations is greatly improved, the "Matthew effect" becomes increasingly evident. While the head contents get ...
Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks (Jun 12, 2019). Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. Hot spots easily appear at road intersections, where effective communication among vehicles is challenging. UAVs may serve as relays ...
Structure-adaptive manifold estimation (Jun 12, 2019). We consider the problem of manifold estimation from noisy observations. We suggest a novel adaptive procedure, which simultaneously reconstructs a smooth manifold from the observations and estimates projectors onto the tangent spaces. Many manifold learning ...
Neural Variational Inference For Estimating Uncertainty in Knowledge Graph Embeddings (Jun 12, 2019). Recent advances in neural variational inference have allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data. While traditional variational methods derive an analytical approximation for the intractable distribution ...
A Stratified Approach to Robustness for Randomly Smoothed Classifiers (Jun 12, 2019). Strong theoretical guarantees of robustness can be given for ensembles of classifiers generated by input randomization. Specifically, an $\ell_2$ bounded adversary cannot alter the ensemble prediction generated by an isotropic Gaussian perturbation, where ...
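The randomized-smoothing construction referred to above replaces a base classifier with a majority vote under Gaussian input noise; a minimal Monte Carlo sketch with a hypothetical base_classifier function:

    import numpy as np

    def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
        """Return the class the base classifier outputs most often under isotropic Gaussian noise."""
        rng = rng or np.random.default_rng(0)
        noisy = x[None, :] + sigma * rng.standard_normal((n_samples, x.size))
        votes = np.array([base_classifier(z) for z in noisy])  # base_classifier is hypothetical
        labels, counts = np.unique(votes, return_counts=True)
        return labels[np.argmax(counts)]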
Partial Or Complete, That's The Question (Jun 12, 2019). For many structured learning tasks, the data annotation process is complex and costly. Existing annotation schemes usually aim at acquiring completely annotated structures, under the common perception that partial structures are of low quality and could ...
Non-Parametric Calibration for Classification (Jun 12, 2019). Many applications of classification methods require not only high accuracy but also reliable estimation of predictive uncertainty. However, while many current classification frameworks, in particular deep neural network architectures, provide very good ...
SPoC: Search-based Pseudocode to Code (Jun 12, 2019). We consider the task of mapping pseudocode to long programs that are functionally correct. Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation. ...
Improving Importance Weighted Auto-Encoders with Annealed Importance Sampling (Jun 12, 2019). Stochastic variational inference with an amortized inference model and the reparameterization trick has become a widely used algorithm for learning latent variable models. Increasing the flexibility of approximate posterior distributions while maintaining ...
Coresets for Gaussian Mixture Models of Any Shape (Jun 12, 2019). An $\varepsilon$-coreset for a given set $D$ of $n$ points is usually a small weighted set, such that querying the coreset \emph{provably} yields a $(1+\varepsilon)$-factor approximation to the original (full) dataset, for a given family of queries. ...
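Spelled out, the $(1+\varepsilon)$-factor guarantee quoted above is commonly stated as: for every query $q$ in the admissible family,

    (1-\varepsilon)\,\mathrm{cost}(D,q) \;\le\; \mathrm{cost}(C,q) \;\le\; (1+\varepsilon)\,\mathrm{cost}(D,q),

where $C$ is the weighted coreset and $\mathrm{cost}$ is the query-specific loss (for example, a Gaussian mixture negative log-likelihood in this setting).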
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks (Jun 12, 2019). Tight estimation of the Lipschitz constant for deep neural networks (DNNs) is useful in many applications ranging from robustness certification of classifiers to stability analysis of closed-loop systems with reinforcement learning controllers. Existing ...
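The simplest baseline such methods improve on is the product of per-layer spectral norms, which upper-bounds the Lipschitz constant of a feed-forward network with 1-Lipschitz activations; a small numpy sketch (the weights are random placeholders):

    import numpy as np

    def naive_lipschitz_bound(weight_matrices):
        """Product of spectral norms: an upper bound on the network's Lipschitz constant."""
        return float(np.prod([np.linalg.norm(W, ord=2) for W in weight_matrices]))

    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
    print(naive_lipschitz_bound(layers))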
High-resolution Markov state models for the dynamics of Trp-cage miniprotein constructed over slow folding modes identified by state-free reversible VAMPnets (Jun 12, 2019). State-free reversible VAMPnets (SRVs) are a neural network-based framework capable of learning the leading eigenfunctions of the transfer operator of a dynamical system from trajectory data. In molecular dynamics simulations, these data-driven collective ...
Using Small Proxy Datasets to Accelerate Hyperparameter Search (Jun 12, 2019). One of the biggest bottlenecks in a machine learning workflow is waiting for models to train. Depending on the available computing resources, it can take days to weeks to train a neural network on a large dataset with many classes such as ImageNet. For ...
Run-Time Efficient RNN Compression for Inference on Edge Devices (Jun 12, 2019). Recurrent neural networks can be large and compute-intensive, yet many applications that benefit from RNNs run on small devices with very limited compute and storage capabilities while still having run-time constraints. As a result, there is a need for ...
Multiple instance learning with graph neural networks (Jun 12, 2019). Multiple instance learning (MIL) aims to learn the mapping between a bag of instances and the bag-level label. In this paper, we propose a new end-to-end graph neural network (GNN) based algorithm for MIL: we treat each bag as a graph and use a GNN to learn ...
Communication-Efficient Accurate Statistical Estimation (Jun 12, 2019). When the data are stored in a distributed manner, direct application of traditional statistical inference procedures is often prohibitive due to communication cost and privacy concerns. This paper develops and investigates two Communication-Efficient ...
Semi-flat minima and saddle points by embedding neural networks to overparameterization (Jun 12, 2019). We theoretically study the landscape of the training error for neural networks in overparameterized cases. We consider three basic methods for embedding a network into a wider one with more hidden units, and discuss whether a minimum point of the narrower ...
On regularization for a convolutional kernel in neural networks (Jun 12, 2019). Convolutional neural networks are an important class of deep learning models. The exploding/vanishing gradient problem can be avoided, and the generalizability of a neural network improved, if the singular values of the Jacobian of a layer are bounded around ...
Statistical guarantees for local graph clustering (Jun 11, 2019). Local graph clustering methods aim to find small clusters in very large graphs. These methods take as input a graph and a seed node, and they return as output a good cluster in a running time that depends on the size of the output cluster but that is ...
Reinforcement Learning for Integer Programming: Learning to Cut (Jun 11, 2019). Integer programming (IP) is a general optimization framework widely applicable to a variety of unstructured and structured problems arising in, e.g., scheduling, production planning, and graph optimization. As IP models many provably hard-to-solve problems, ...
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks (Jun 11, 2019). Generative adversarial networks have been very successful in generative modeling; however, they remain relatively hard to optimize compared to standard deep neural networks. In this paper, we try to gain insight into the optimization of GANs by looking ...
Discrepancy, Coresets, and Sketches in Machine Learning (Jun 11, 2019). This paper defines the notion of class discrepancy for families of functions. It shows that low discrepancy classes admit small offline and streaming coresets. We provide general techniques for bounding the class discrepancy of machine learning problems. ...
ADASS: Adaptive Sample Selection for Training Acceleration (Jun 11, 2019). Stochastic gradient descent (SGD) and its variants, including some accelerated variants, have become popular for training in machine learning. However, in all existing SGD variants, the sample size in each iteration (epoch) of training is the same ...
Medium-Term Load Forecasting Using Support Vector Regression, Feature Selection, and Symbiotic Organism Search Optimization (Jun 11, 2019). Accurate load forecasting has always been an indispensable part of the operation and planning of power systems. Among different time horizons of forecasting, while short-term load forecasting (STLF) and long-term load forecasting (LTLF) ...
Position-aware Graph Neural Networks (Jun 11, 2019). Learning node embeddings that capture a node's position within the broader graph structure is crucial for many prediction tasks on graphs. However, existing Graph Neural Network (GNN) architectures have limited power in capturing the position/location ...
Towards Inverse Reinforcement Learning for Limit Order Book Dynamics (Jun 11, 2019). Multi-agent learning is a promising method to simulate aggregate competitive behaviour in finance. Learning expert agents' reward functions through their external demonstrations is hence particularly relevant for subsequent design of realistic agent-based ...
Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference (Jun 11, 2019). In this work, we propose to quantize all parts of standard classification networks and replace the activation-weight multiply step with a simple table-based lookup. This approach results in networks that are free of floating-point operations and free ...
Power Gradient Descent (Jun 11, 2019). The development of machine learning is promoting the search for fast and stable minimization algorithms. To this end, we suggest a change to current gradient descent methods that should speed up the motion in flat regions and slow it down in steep ...
Stability of Graph Scattering Transforms (Jun 11, 2019). Scattering transforms are non-trainable deep convolutional architectures that exploit the multi-scale resolution of a wavelet filter bank to obtain an appropriate representation of data. More importantly, they are proven invariant to translations, and ...
Issues with post-hoc counterfactual explanations: a discussion (Jun 11, 2019). Counterfactual post-hoc interpretability approaches have proven to be useful tools for generating explanations for the predictions of a trained black-box classifier. However, the assumptions they make about the data and the classifier make them unreliable ...
Deep Forward-Backward SDEs for Min-max Control (Jun 11, 2019). This paper presents a novel approach to numerically solving stochastic differential games for nonlinear systems. The proposed approach relies on the nonlinear Feynman-Kac theorem, which establishes a connection between parabolic deterministic partial differential ...
Deep 2FBSDEs for Systems with Control Multiplicative Noise (Jun 11, 2019). We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman partial differential equations. Such PDEs arise when one considers stochastic dynamics ...
Large Scale Structure of Neural Network Loss Landscapes (Jun 11, 2019). There are many surprising and perhaps counter-intuitive properties of the optimization of deep neural networks. We propose and experimentally verify a unified phenomenological model of the loss landscape that incorporates many of them. High dimensionality ...
Data-Free Quantization through Weight Equalization and Bias Correction (Jun 11, 2019). We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization ...
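For reference, plain symmetric 8-bit fixed-point quantization of a weight tensor (the baseline such data-free methods build on, not the paper's equalization procedure) looks roughly like:

    import numpy as np

    def quantize_symmetric_int8(w):
        """Map a float tensor to int8 with a single per-tensor scale; returns (q, scale)."""
        scale = np.max(np.abs(w)) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
    q, s = quantize_symmetric_int8(w)
    print(np.max(np.abs(w - dequantize(q, s))))  # worst-case quantization error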
Graph Convolutional Transformer: Learning the Graphical Structure of Electronic Health Records (Jun 11, 2019). Effective modeling of electronic health records (EHR) is rapidly becoming an important topic in both academia and industry. A recent study showed that utilizing the graphical structure underlying EHR data (e.g. the relationship between diagnoses and treatments) ...
From Fully Supervised to Zero Shot Settings for Twitter Hashtag Recommendation (Jun 11, 2019). We propose a comprehensive end-to-end pipeline for a Twitter hashtag recommendation system, including data collection, a supervised training setting and a zero shot training setting. In the supervised training setting, we have proposed and compared the performance ...
Deep Learning based Emotion Recognition System Using Speech Features and Transcriptions (Jun 11, 2019). This paper proposes a speech emotion recognition method based on speech features and speech transcriptions (text). Speech features such as the Spectrogram and Mel-frequency Cepstral Coefficients (MFCC) help retain emotion-related low-level characteristics ...
Communication and Memory Efficient Testing of Discrete Distributions (Jun 11, 2019). We study distribution testing with communication and memory constraints in the following computational models: (1) the {\em one-pass streaming model}, where the goal is to minimize the sample complexity of the protocol subject to a memory constraint, and ...
Fast and Accurate Least-Mean-Squares Solvers (Jun 11, 2019). Least-mean-squares (LMS) solvers such as Linear / Ridge / Lasso-Regression, SVD and Elastic-Net not only solve fundamental machine learning problems, but are also the building blocks in a variety of other methods, such as decision trees and matrix factorizations. ...
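As a reminder of the kind of solver meant here, the closed-form ridge regression fit is a single linear solve; a numpy sketch (the paper is about speeding up such solvers, not about this formula itself):

    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        """Solve (X^T X + lam * I) beta = X^T y for the ridge regression coefficients."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    rng = np.random.default_rng(0)
    X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
    print(ridge_fit(X, y))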
Calibration, Entropy Rates, and Memory in Language Models (Jun 11, 2019). Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing. Towards this end, we present a calibration-based approach to measure long-term discrepancies between a generative sequence ...
Variance-reduced $Q$-learning is minimax optimal (Jun 11, 2019). We introduce and analyze a form of variance-reduced $Q$-learning. For $\gamma$-discounted MDPs with finite state space $\mathcal{X}$ and action space $\mathcal{U}$, we prove that it yields an $\epsilon$-accurate estimate of the optimal $Q$-function in ...
On Single Source Robustness in Deep Fusion Models (Jun 11, 2019). Algorithms that fuse multiple input sources benefit from both complementary and shared information. Shared information may provide robustness to faulty or noisy inputs, which is indispensable for safety-critical applications like self-driving cars. We ...
An Improved Analysis of Training Over-parameterized Deep Neural Networks (Jun 11, 2019). A recent line of research has shown that gradient-based algorithms with random initialization can converge to the global minima of the training loss for over-parameterized (i.e., sufficiently wide) deep neural networks. However, the condition on the width ...
A Hybrid Approach Between Adversarial Generative Networks and Actor-Critic Policy Gradient for Low Rate High-Resolution Image Compression (Jun 11, 2019). Image compression is an essential approach for decreasing the size in bytes of an image without degrading its quality. Typically, classic algorithms are used, but recently deep learning has been applied successfully. In this work, we present ...
Efficient Kernel-based Subsequence Search for User Identification from Walking Activity (Jun 11, 2019). This paper presents an efficient approach for subsequence search in data streams. The problem consists in identifying coherent repetitions of a given reference time series, possibly multivariate, within a longer data stream. Dynamic Time Warping (DTW) ...
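For context, the classical DTW distance that such kernel-based searches aim to speed up is a simple dynamic program; a compact numpy version for univariate series:

    import numpy as np

    def dtw_distance(a, b):
        """Classical O(len(a) * len(b)) dynamic-time-warping distance between two 1-D series."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))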
A Taxonomy of Channel Pruning Signals in CNNs (Jun 11, 2019). Convolutional neural networks (CNNs) are widely used for classification problems. However, they often require large amounts of computation and memory, which are not readily available in resource-constrained systems. Pruning unimportant parameters from ...
Learning Selection Masks for Deep Neural Networks (Jun 11, 2019). Data often have to be moved between servers and clients during the inference phase. For instance, modern virtual assistants collect data on mobile devices, and the data are sent to remote servers for analysis. A related scenario is that clients have ...
Extracting Interpretable Concept-Based Decision Trees from CNNs (Jun 11, 2019). In an attempt to gather a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through a ...
Faster Algorithms for High-Dimensional Robust Covariance Estimation (Jun 11, 2019). We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted. Recent work gave the first polynomial-time algorithms for this problem with near-optimal ...
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs (Jun 11, 2019; updated Jun 12, 2019). Exciting new work on generalization bounds for neural networks (NN), given by Neyshabur et al. and Bartlett et al., closely depends on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the ...
Adaptive Neural Signal Detection for Massive MIMO (Jun 11, 2019). Symbol detection for Massive Multiple-Input Multiple-Output (MIMO) is a challenging problem for which traditional algorithms are either impractical or suffer from performance limitations. Several recently proposed learning-based approaches achieve promising ...