Sequence-Level Knowledge Distillation (Jun 25 2016; revised Sep 22 2016). Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However, to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider ...

A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing (Jan 23 2014). Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the ...

Lie-Access Neural Turing Machines (Nov 09 2016). Recent work has demonstrated the effectiveness of employing explicit external memory structures in conjunction with deep neural models for algorithmic learning (Graves et al. 2014; Weston et al. 2014). These models utilize differentiable versions of traditional ...

Sequence-to-Sequence Learning as Beam-Search Optimization (Jun 09 2016; revised Nov 10 2016). Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable ...

Simple Unsupervised Summarization by Contextual Matching (Jul 31 2019). We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained), and the other that is specific to the target domain. We show that by using a product-of-experts ...
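The product-of-experts combination mentioned in this abstract can be sketched in a few lines; the two toy distributions below are hypothetical stand-ins for a generic and a domain-specific language model, not the paper's actual models.

```python
# Product-of-experts: multiply the next-token probabilities from two
# language models and renormalize, so only tokens favored by BOTH
# experts keep appreciable mass.
def product_of_experts(p_generic, p_domain):
    """Combine two next-token distributions (dicts: token -> prob)."""
    combined = {t: p_generic.get(t, 0.0) * p_domain.get(t, 0.0)
                for t in set(p_generic) | set(p_domain)}
    z = sum(combined.values())
    return {t: p / z for t, p in combined.items()} if z > 0 else combined

# Toy distributions standing in for real LM outputs.
p_lm = {"the": 0.5, "a": 0.3, "stocks": 0.2}
p_domain = {"stocks": 0.8, "the": 0.2}
mix = product_of_experts(p_lm, p_domain)
best = max(mix, key=mix.get)  # token favored jointly by both experts
```

Note how "a", which the domain model assigns zero probability, is eliminated entirely, while "stocks" overtakes "the" once both experts are consulted.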

Latent Normalizing Flows for Discrete Sequences (Jan 29 2019). Normalizing flows have been shown to be a powerful class of generative models for continuous random variables, giving both strong performance and the potential for non-autoregressive generation. These benefits are also desired when modeling discrete random ...

A Tutorial on Deep Latent Variable Models of Natural Language (Dec 17 2018; revised Dec 18 2018). There has been much recent, exciting work on combining the complementary strengths of latent variable models and deep learning. Latent variable modeling makes it easy to explicitly specify model constraints through conditional independence properties, ...

Compound Probabilistic Context-Free Grammars for Grammar Induction (Jun 24 2019; revised Jul 08 2019). We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our context-free rule ...

Bottom-Up Abstractive Summarization (Aug 31 2018; revised Oct 09 2018). Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content ...

Semiclassical quantisation for a bosonic atom-molecule conversion system (May 13 2015; revised Jul 27 2015). We consider a simple quantum model of atom-molecule conversion where bosonic atoms can combine into diatomic molecules and vice versa. The many-particle system can be expressed in terms of the generators of a deformed $SU(2)$ algebra, and the mean-field ...

What You Get Is What You See: A Visual Markup Decompiler (Sep 16 2016). Building on recent advances in image caption generation and optical character recognition (OCR), we present a general-purpose, deep learning-based system to decompile an image into presentational markup. While this task is a well-studied problem in OCR, ...

A Neural Attention Model for Abstractive Sentence Summarization (Sep 02 2015; revised Sep 03 2015). Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a ...

Propagation of Gaussian beams in the presence of gain and loss (Jan 28 2016). We consider the propagation of Gaussian beams in a waveguide with gain and loss in the paraxial approximation governed by the Schr\"odinger equation. We derive equations of motion for the beam in the semiclassical limit that are valid when the waveguide ...

Entity Tracking Improves Cloze-style Reading Comprehension (Oct 05 2018). Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, ...

GLTR: Statistical Detection and Visualization of Generated Text (Jun 10 2019). The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, ...
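The kind of per-token rank statistic GLTR visualizes can be illustrated with a toy model; the fixed next-token distribution below is a hypothetical stand-in for a real context-conditioned language model.

```python
# Rank each observed token in the model's next-token distribution.
# Text dominated by top-ranked tokens (GLTR's "green" bucket) is a
# hint that a language model, not a human, produced it.
def token_ranks(tokens, next_token_probs):
    """next_token_probs: token -> model probability (context-free toy)."""
    ordered = sorted(next_token_probs, key=next_token_probs.get, reverse=True)
    rank = {t: i + 1 for i, t in enumerate(ordered)}
    # Unknown tokens get a rank past the end of the vocabulary.
    return [rank.get(t, len(ordered) + 1) for t in tokens]

probs = {"the": 0.4, "cat": 0.3, "sat": 0.2, "quietly": 0.1}
ranks = token_ranks(["the", "cat", "sat"], probs)
avg_rank = sum(ranks) / len(ranks)  # low average rank -> suspicious
```

A real detector would recompute the distribution at every position from the preceding context; the summary statistic (average rank, or the fraction of tokens in the top-k) is the same idea.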

On Orbits of Order Ideals of Minuscule Posets II: Homomesy (Sep 27 2015). The Fon-Der-Flaass action partitions the order ideals of a poset into disjoint orbits. For a product of two chains, Propp and Roby observed that the mean cardinality of the order ideals within an orbit is invariant across orbits. That this phenomenon, ...

Classical-quantum correspondence in bosonic two-mode conversion systems: polynomial algebras and Kummer shapes (Oct 06 2015; revised Feb 04 2016). Bosonic quantum conversion systems can be modeled by many-particle single-mode Hamiltonians describing a conversion of $n$ molecules of type A into $m$ molecules of type B and vice versa. These Hamiltonians are analyzed in terms of generators of a polynomially ...

Structured Attention Networks (Feb 03 2017; revised Feb 16 2017). Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, ...

End-to-End Content and Plan Selection for Data-to-Text Generation (Oct 10 2018). Learning to generate fluent natural language from structured data with neural networks has become a common approach for NLG. This problem can be challenging when the form of the structured data varies between examples. This paper presents a survey of ...

Learning Global Features for Coreference Resolution (Apr 11 2016). There is compelling evidence that coreference prediction would benefit from modeling global information about entity-clusters. Yet, state-of-the-art performance can be achieved with systems treating each mention prediction independently, which we attribute ...

Classical and quantum dynamics in the (non-Hermitian) Swanson oscillator (Sep 23 2014; revised Dec 21 2014). The non-Hermitian quadratic oscillator studied by Swanson is one of the popular $PT$-symmetric model systems. Here a full classical description of its dynamics is derived using recently developed metriplectic flow equations, which combine the classical ...

Word Ordering Without Syntax (Apr 28 2016; revised Sep 24 2016). Recent work on word ordering has argued that syntactic structure is important, or even required, for effectively recovering the order of a sentence. We find that, in fact, an n-gram language model with a simple heuristic gives strong results on this task. ...
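The n-gram-based approach to word ordering can be sketched as below; the bigram scores are invented for illustration, and the exhaustive search over permutations stands in for the beam search a real system would use.

```python
import itertools

# Toy bigram model: score(order) = sum of bigram scores along the
# sequence; unseen bigrams get a tiny smoothing score.
BIGRAM = {("the", "dog"): 0.5, ("dog", "barks"): 0.4,
          ("barks", "the"): 0.01, ("dog", "the"): 0.02}

def score(order):
    return sum(BIGRAM.get(pair, 1e-6) for pair in zip(order, order[1:]))

def best_order(words):
    # Exhaustive for clarity; feasible only for small bags of words.
    return max(itertools.permutations(words), key=score)

ordering = best_order(["dog", "the", "barks"])
```

Here the language model alone recovers the natural order ("the", "dog", "barks"), with no syntactic parse in sight, which is the paper's point in miniature.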

Sentence-Level Grammatical Error Identification as Sequence-to-Sequence Correction (Apr 16 2016). We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016. The attention-based encoder-decoder models can be ...

Image-to-Markup Generation with Coarse-to-Fine Attention (Sep 16 2016; revised Jun 13 2017). We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism. Our method is evaluated in the context of image-to-LaTeX generation, and we introduce a new dataset of real-world ...

On the Flip Side: Identifying Counterexamples in Visual Question Answering (Jun 03 2018; revised Jul 24 2018). Visual question answering (VQA) models respond to open-ended natural language questions about images. While VQA is an increasingly popular area of research, it is unclear to what extent current VQA architectures learn key semantic distinctions between ...

Character-Aware Neural Language Models (Aug 26 2015; revised Dec 01 2015). We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to ...
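The highway network mentioned here mixes a transformed signal with its untouched input through a learned gate. A minimal scalar sketch (all weights invented for illustration, not taken from the paper):

```python
import math

# Highway layer: y = t * H(x) + (1 - t) * x, where H is a nonlinear
# transform (ReLU here) and t = sigmoid(w_t * x + b_t) is the gate.
# When t ~ 0 the layer carries its input through unchanged, which is
# what makes deep stacks of these layers easy to train.
def highway(x, w_h, b_h, w_t, b_t):
    h = max(0.0, w_h * x + b_h)                    # transform branch
    t = 1.0 / (1.0 + math.exp(-(w_t * x + b_t)))   # gate in (0, 1)
    return t * h + (1.0 - t) * x                   # gated mixture

# A strongly negative gate bias drives t toward 0: the layer acts as
# an identity and y stays near the input x = 2.0.
y = highway(2.0, w_h=1.5, b_h=0.0, w_t=0.0, b_t=-20.0)
```

In the paper's setting the same formula is applied element-wise to the vector output of the character CNN, with learned weight matrices in place of these scalars.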

LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks (Jun 23 2016; revised Oct 30 2017). Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding ...

Unsupervised Recurrent Neural Network Grammars (Apr 07 2019). Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs achieve strong language ...

Compositions of consistent systems of rank one discrete valuation rings (Sep 26 2008). Let V be a rank one discrete valuation ring (DVR) on a field F and let L/F be a finite separable algebraic field extension with [L:F] = m. The integral closure of V in L is a Dedekind domain that encodes the following invariants: (i) the number of extensions ...

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks (Feb 19 2015; revised Dec 31 2015). One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a ...

Tensor Variable Elimination for Plated Factor Graphs (Feb 08 2019; revised May 17 2019). A wide class of machine learning algorithms can be reduced to variable elimination on factor graphs. While factor graphs provide a unifying notation for these algorithms, they do not provide a compact way to express repeated structure when compared to ...

Latent Alignment and Variational Attention (Jul 10 2018; revised Nov 07 2018). Neural attention has become central to many state-of-the-art models in natural language processing and related domains. Attention networks are an easy-to-train and effective method for softly simulating alignment; however, the approach does not marginalize ...

Randomized Verblunsky Parameters in Steklov's Problem (Apr 06 2017; revised May 29 2017). We consider randomized Verblunsky parameters for orthogonal polynomials on the unit circle as they relate to the problem of Steklov, bounding the polynomials' uniform norm independent of $n$.

Semi-Amortized Variational Autoencoders (Feb 07 2018). Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests ...

OpenNMT: Open-source Toolkit for Neural Machine Translation (Sep 12 2017). We introduce an open-source toolkit for neural machine translation (NMT) to support research into model architectures, feature representations, and source modalities, while maintaining competitive performance, modularity and reasonable training requirements. ...

OpenNMT: Open-Source Toolkit for Neural Machine Translation (Jan 10 2017; revised Mar 06 2017). We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, ...

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference (Jul 24 2019). Automation of tasks can have critical consequences when humans lose agency over decision processes. Deep learning models are particularly susceptible since current black-box approaches lack explainable reasoning. We argue that both the visual interface ...

Avoiding Latent Variable Collapse With Generative Skip Models (Jul 12 2018; revised Jan 30 2019). Variational autoencoders learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. VAEs can capture complex distributions, but they ...

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models (Apr 25 2018; revised Oct 16 2018). Neural Sequence-to-Sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work in a five-stage black-box process that involves encoding ...

OpenNMT: Neural Machine Translation Toolkit (May 28 2018). OpenNMT is an open-source toolkit for neural machine translation (NMT). The system prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, ...

Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference (Jul 09 2019). Natural Language Inference (NLI) datasets often contain hypothesis-only biases: artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models ...

On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference (Jul 09 2019). Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore sensitive biases and spurious correlations in data. We evaluate whether adversarial learning can be ...

Encoder-Agnostic Adaptation for Conditional Language Generation (Aug 19 2019). Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question ...

Weightless: Lossy Weight Encoding For Deep Neural Network Compression (Nov 13 2017). The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning ...

Cyclic Sieving and Plethysm Coefficients (Aug 27 2014). A combinatorial expression for the coefficient of the Schur function $s_{\lambda}$ in the expansion of the plethysm $p_{n/d}^d \circ s_{\mu}$ is given for all $d$ dividing $n$ for the cases in which $n=2$ or $\lambda$ is rectangular. In these cases, the ...

Computing the Lusztig-Vogan Bijection (Nov 01 2017). Let $G$ be a connected complex reductive algebraic group with Lie algebra $\mathfrak{g}$. The Lusztig-Vogan bijection relates two bases for the bounded derived category of $G$-equivariant coherent sheaves on the nilpotent cone $\mathcal{N}$ of $\mathfrak{g}$. ...

Finite Sample Analysis of Approximate Message Passing Algorithms (Jun 06 2016; revised Mar 16 2018). Approximate message passing (AMP) refers to a class of efficient algorithms for statistical estimation in high-dimensional problems such as compressed sensing and low-rank matrix estimation. This paper analyzes the performance of AMP in the regime where ...

On Schur parameters in Steklov's problem (Jun 14 2016). We study the recursion (aka Schur) parameters for monic polynomials orthogonal on the unit circle with respect to a weight which provides a negative answer to the conjecture of Steklov.

On Order Ideals of Minuscule Posets III: The CDE Property (Jul 27 2016). Recent work of Hopkins establishes that the lattice of order ideals of a minuscule poset satisfies the coincidental down-degree expectations property of Reiner, Tenner, and Yong. His approach appeals to the classification of minuscule posets. A uniform ...

The Error Probability of Sparse Superposition Codes with Approximate Message Passing Decoding (Dec 19 2017; revised Oct 17 2018). Sparse superposition codes, or sparse regression codes (SPARCs), are a recent class of codes for reliable communication over the AWGN channel at rates approaching the channel capacity. Approximate message passing (AMP) decoding, a computationally efficient ...

Orthogonal polynomials on the circle for a weight $w$ satisfying $w, 1/w \in BMO(\mathbb{T})$ (Jan 13 2016; revised Nov 02 2016). In the case when the weight and its inverse belong to $BMO(\mathbb{T})$, we prove the asymptotics of the monic orthogonal polynomials in $L^p$, $2<p<p_0$. Immediate applications include the estimates on the uniform norm and asymptotics for the polynomial entropy.

The Soft-X-Ray Spectral Shape of X-Ray-Weak Seyferts (Jul 27 1995). (I) We observed eight Seyfert 2s and two X-ray-weak Seyfert 1/QSOs with the ROSAT PSPC, and one Seyfert 2 with the ROSAT HRI. These targets were selected from the Extended 12 micron Galaxy Sample. (II) Both Seyfert 1/QSOs vary by factors of 1.5 to 2. The ...

A general formalism of two-dimensional lattice potential on beam transverse plane for studying channeling radiation (Sep 14 2017). To study channeling radiation produced by an ultra-relativistic electron beam channeling through a single crystal, a lattice potential of the crystal is required for solving the transverse motion of beam electrons under the influence of the crystal lattice. ...

On Orbits of Order Ideals of Minuscule Posets (Aug 26 2011; revised Jun 27 2012). An action on order ideals of posets considered by Fon-Der-Flaass is analyzed in the case of posets arising from minuscule representations of complex simple Lie algebras. For these minuscule posets, it is shown that the Fon-Der-Flaass action exhibits the ...

Data Visualization on Day One: Bringing Big Ideas into Intro Stats Early and Often (May 23 2017). In a world awash with data, the ability to think and compute with data has become an important skill for students in many fields. For that reason, inclusion of some level of statistical computing in many introductory-level courses has grown more common ...

An Analysis of State Evolution for Approximate Message Passing with Side Information (Feb 01 2019). A common goal in many research areas is to reconstruct an unknown signal x from noisy linear measurements. Approximate message passing (AMP) is a class of low-complexity algorithms for efficiently solving such high-dimensional regression tasks. Often, ...

Capacity-achieving Sparse Superposition Codes via Approximate Message Passing Decoding (Jan 23 2015; revised Aug 02 2016). Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse ...

Analysis of Approximate Message Passing with a Class of Non-Separable Denoisers (May 09 2017; revised Aug 13 2017). Approximate message passing (AMP) is a class of efficient algorithms for solving high-dimensional linear regression tasks where one wishes to recover an unknown signal $\beta_0$ from noisy, linear measurements $y = A\beta_0 + w$. When applying a separable ...

The Extended 12 Micron Galaxy Sample (Jun 17 1993). We have selected an all-sky sample of 893 galaxies from the IRAS FSC-2, defined by a total (ADDSCAN) 12 micron flux limit of 0.22 Jy. Completeness is verified to 0.30 Jy, below which we have quantified the incompleteness down to 0.22 Jy for our statistical ...

Spatially Coupled Sparse Regression Codes: Design and State Evolution Analysis (Jan 05 2018; revised Apr 26 2018). We consider the design and analysis of spatially coupled sparse regression codes (SC-SPARCs), which were recently introduced by Barbier et al. for efficient communication over the additive white Gaussian noise channel. SC-SPARCs can be efficiently decoded ...

Analysis of Approximate Message Passing with Non-Separable Denoisers and Markov Random Field Priors (May 10 2019). Approximate message passing (AMP) is a class of low-complexity, scalable algorithms for solving high-dimensional linear regression tasks where one wishes to recover an unknown signal from noisy, linear measurements. AMP is an iterative algorithm that ...

Kazhdan-Laumon representations of finite Chevalley groups, character sheaves and some generalization of the Lefschetz-Verdier trace formula (Oct 01 1998). In this paper we investigate the representations of reductive groups over a finite field, introduced in 1987 by D. Kazhdan and G. Laumon. We show that generically these representations are irreducible and that their character is equal to the function obtained ...

$L^2$-interpolation with error and size of spectra (Jun 18 2008). Given a compact set $S$ and a uniformly discrete sequence $\Lambda$, we show that "approximate interpolation" of delta functions on $\Lambda$ by a bounded sequence of $L^2$-functions with spectra in $S$ implies an estimate on the measure of $S$ through the density ...

Homological projective duality for quadrics (Feb 26 2019). We show that over an algebraically closed field of characteristic not equal to 2, homological projective duality for smooth quadric hypersurfaces and for double covers of projective spaces branched over smooth quadric hypersurfaces is a combination of ...

Categorical cones and quadratic homological projective duality (Feb 26 2019). We introduce the notion of a categorical cone, which provides a categorification of the classical cone over a projective variety, and use our work on categorical joins to describe its behavior under homological projective duality. In particular, our construction ...

Categorical joins (Mar 31 2018; revised Feb 26 2019). We introduce the notion of a categorical join, which can be thought of as a categorification of the classical join of two projective varieties. This notion is in the spirit of homological projective duality, which categorifies classical projective duality. ...

Fast gradient descent method for convex optimization problems with an oracle that generates a $(\delta,L)$-model of a function at a requested point (Nov 07 2017; revised Feb 02 2019). In this article we propose a new concept of a $(\delta,L)$-model of a function which generalizes the concept of the $(\delta,L)$-oracle (Devolder-Glineur-Nesterov). Using this concept we describe the gradient descent method and the fast gradient descent ...

Moments of random sums and Robbins' problem of optimal stopping (Jul 17 2011). Robbins' problem of optimal stopping asks one to minimise the expected rank of the observation chosen by some nonanticipating stopping rule. We settle a conjecture regarding the value of the stopped variable under the rule optimal in the sense ...

Governing Singularities of Schubert Varieties (Mar 12 2006; revised Jun 29 2006). We present a combinatorial and computational commutative algebra methodology for studying singularities of Schubert varieties of flag manifolds. We define the combinatorial notion of *interval pattern avoidance*. For "reasonable" invariants P of singularities, ...

Scattering theory and Banach space valued singular integrals (Nov 28 2012). We give a new sufficient condition for existence and completeness of wave operators in abstract scattering theory. This condition generalises both trace class and smooth approaches to scattering theory. Our construction is based on estimates for the Cauchy ...

Subtle Characteristic Classes (Jan 26 2014). We construct new subtle Stiefel-Whitney classes of quadratic forms. These classes are much more informative than the ones introduced by Milnor. In particular, they see all the powers of the fundamental ideal of the Witt ring, contain the Arason invariant ...

Diagonalization of the Finite Hilbert Transform on two adjacent intervals (Nov 06 2015). We study the interior problem of tomography. The starting point is the Gelfand-Graev formula, which converts the tomographic data into the finite Hilbert transform (FHT) of an unknown function $f$ along a collection of lines. Pick one such line, call ...

Cauchy independent measures and super-additivity of analytic capacity (Nov 12 2012). We show that, given a family of discs centered at a nice curve, the analytic capacities of arbitrary subsets of these discs add up. However, we require the discs in question to be slightly separated, and it is not clear whether the separation condition ...

Random "dyadic" lattice in geometrically doubling metric space and $A_2$ conjecture (Mar 27 2011; revised Apr 21 2011). Recently three proofs of the $A_2$-conjecture were obtained. All of them are "glued" to Euclidean space and a special choice of one random dyadic lattice. We build a random "dyadic" lattice in any doubling metric space which has properties that are enough ...

Reachability in Higher-Order-Counters (Jun 05 2013). Higher-order counter automata (HOCs) can be either seen as a restriction of higher-order pushdown automata (HOPs) to a unary stack alphabet, or as an extension of counter automata to higher levels. We distinguish two principal kinds of HOCs: those ...

Paramagnetic tunneling state concept of the low-temperature magnetic anomalies of multicomponent insulating glasses (Mar 17 2006). A generalized tunneling model of multicomponent insulating glasses is formulated, considering tunneling states to be paramagnetic centers of the electronic hole type. The expression for the magnetic-field-dependent contribution to the free energy is obtained. ...

Exceptional collections on isotropic Grassmannians (Oct 25 2011; revised Sep 13 2015). We introduce a new construction of exceptional objects in the derived category of coherent sheaves on a compact homogeneous space of a semisimple algebraic group and show that it produces exceptional collections of length equal to the rank of the ...

Magnetic susceptibility of the quark condensate via holography (Feb 11 2009; revised Jan 13 2010). We discuss the holographic derivation of the magnetic susceptibility of the quark condensate. It is found that the susceptibility emerges upon accounting for the Chern-Simons term in the holographic action. We demonstrate that Vainshtein's relation is ...

On Beurling's sampling theorem in $\mathbb{R}^n$ (Jun 03 2011). We present an elementary proof of the classical Beurling sampling theorem which gives a sufficient condition for sampling of multi-dimensional band-limited functions.

Regenerative compositions in the case of slow variation: A renewal theory approach (Sep 27 2011; revised Sep 01 2012). A regenerative composition structure is a sequence of ordered partitions derived from the range of a subordinator by a natural sampling procedure. In this paper, we extend previous studies by Barbour and Gnedin (2006) and Gnedin, Iksanov and Marynych (2010) ...