Results for "Soufiane Lakhlifi"

27 results
An exact extremal result for tournaments and 4-uniform hypergraphs (Feb 21 2018)
In this paper, we address the following problem due to Frankl and F\"uredi (1984). What is the maximum number of hyperedges in an $r$-uniform hypergraph with $n$ vertices, such that every set of $r+1$ vertices contains $0$ or exactly $2$ hyperedges? They ...
Matricial characterization of tournaments with maximum number of diamonds (Jun 11 2019)
A diamond is a $4$-tournament which consists of a vertex dominating or dominated by a $3$-cycle. Assuming the existence of skew-conference matrices, we give a complete characterization of $n$-tournaments with the maximum number of diamonds when $n\equiv0\pmod{4}$ ...
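The diamond definition above is concrete enough to check by brute force. The following is an illustrative sketch of my own (not the paper's matricial characterization): it counts diamonds in a tournament given as a 0/1 dominance matrix, and all function names are mine.

```python
from itertools import combinations, permutations

def is_diamond(adj, quad):
    """True if the 4 vertices in `quad` form a diamond: one vertex
    dominating (beating) or dominated by a 3-cycle on the other three.
    `adj[u][v] == 1` means u beats v."""
    for v in quad:
        rest = [u for u in quad if u != v]
        # the remaining three vertices must form a 3-cycle
        if not any(adj[a][b] and adj[b][c] and adj[c][a]
                   for a, b, c in permutations(rest)):
            continue
        if all(adj[v][u] for u in rest) or all(adj[u][v] for u in rest):
            return True
    return False

def count_diamonds(adj):
    """Count diamonds over all 4-subsets of the tournament."""
    n = len(adj)
    return sum(is_diamond(adj, q) for q in combinations(range(n), 4))

# Smallest example: a 3-cycle 0 -> 1 -> 2 -> 0 dominated by vertex 3.
adj = [[0] * 4 for _ in range(4)]
adj[0][1] = adj[1][2] = adj[2][0] = 1
for u in (0, 1, 2):
    adj[3][u] = 1
print(count_diamonds(adj))  # → 1
```

This brute-force count is O(n^4) and is only meant to make the definition executable on small tournaments.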
On the overestimation of the largest eigenvalue of a covariance matrix (Aug 11 2017)
In this paper, we use a new approach to prove that the largest eigenvalue of the sample covariance matrix of a normally distributed vector is bigger than the true largest eigenvalue with probability 1 when the dimension is infinite. We prove a similar ...
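As a quick numerical illustration of this overestimation phenomenon (a Monte-Carlo sketch of my own, not the paper's proof or setting), one can compare the top eigenvalue of a sample covariance matrix against the true value for i.i.d. standard normal data; with identity population covariance, the true largest eigenvalue is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 200   # n samples in dimension p (high-dimensional regime)

# Population covariance is I_p, so the true largest eigenvalue is 1.
X = rng.standard_normal((n, p))
S = X.T @ X / n                         # sample covariance matrix
sample_top = np.linalg.eigvalsh(S)[-1]  # eigvalsh returns eigenvalues in ascending order

print(sample_top)  # substantially larger than the true value 1
```

With p/n = 2 the largest sample eigenvalue lands near the Marchenko–Pastur edge $(1+\sqrt{p/n})^2 \approx 5.83$, far above the true value 1.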
Cleaning the correlation matrix with a denoising autoencoder (Aug 09 2017)
In this paper, we use an adjusted autoencoder to estimate the true eigenvalues of the population correlation matrix from the sample correlation matrix when the number of samples is small. We show that the model outperforms the Rotational Invariant Estimator ...
Skorohod equation and BSDE's with two reflecting barriers (Oct 03 2011)
We solve a class of doubly reflected backward stochastic differential equations whose generator depends on the resistance due to reflections, extending the recent work of Qian and Xu on reflected BSDEs with one barrier. We then obtain the existence and ...
Discrete time approximation of decoupled Forward-Backward SDE driven by pure jump Lévy processes (Oct 23 2011)
We present a new algorithm to discretize decoupled forward backward stochastic differential equations driven by a pure jump L\'evy process (FBSDEL for short). The method is built in two steps. First, we approximate the FBSDEL by a forward backward stochastic ...
Van't Hoff law for temperature dependent Langmuir constants in clathrate hydrate nanocavities (Mar 16 2015)
This work gives a van't Hoff law expression of Langmuir constants of different species for determining their occupancy in the nanocavities of clathrate hydrates. The van't Hoff law's parameters are derived from a fit with Langmuir constants calculated ...
Strong envelope and strong supermartingale: application to reflected BSDEs (Dec 01 2011; updated Jan 04 2016)
We provide several characterizations to identify the strong envelope (for a bounded measurable process) and the strong supermartingale (for a non-negative right upper semi-continuous process of the class $\mathcal{D}$). As examples of application, we prove existence and ...
Controllability of Neutral Stochastic Functional Integro-Differential Equations Driven by Fractional Brownian Motion with Hurst Parameter Lesser than 1/2 (Sep 22 2018)
In this article we investigate the controllability of neutral stochastic functional integro-differential equations with finite delay, driven by a fractional Brownian motion with Hurst parameter less than $1/2$ in a Hilbert space. We employ the theory ...
A note on Harnack and transportation inequalities for stochastic differential equations with reflections (May 03 2019)
We establish transportation cost inequalities, with respect to the uniform and $L_2$-metric, on the path space of continuous functions, for laws of solutions of stochastic differential equations with reflections. We also consider the case of stochastic ...
Optimal switching problem and system of reflected multi-dimensional FBSDEs with random terminal time (Jun 15 2012; updated Jul 02 2012)
In this paper, we study the solvability of a class of multi-dimensional forward backward stochastic differential equations (FBSDEs) with oblique reflection and unbounded stopping time. Under some mild assumptions on the coefficients in such FBSDE, the ...
Quasilinear elliptic problem without Ambrosetti-Rabinowitz condition involving a potential in Musielak-Sobolev spaces setting (Jul 22 2019)
In this paper, we consider the following quasilinear elliptic problem with potential $$(P) \begin{cases} -\mbox{div}(\phi(x,|\nabla u|)\nabla u)+ V(x)|u|^{q(x)-2}u= f(x,u) & \mbox{ in } \Omega, \\ u=0 & \mbox{ on } \partial\Omega, \end{cases}$$ where ...
Berry-Esséen bounds and almost sure CLT for the quadratic variation of the bifractional Brownian motion (Mar 13 2012; updated Mar 27 2012)
Let $B$ be a bifractional Brownian motion with parameters $H\in (0, 1)$ and $K\in(0,1]$. For any $n\geq1$, set $Z_n =\sum_{i=0}^{n-1}\big[n^{2HK}(B_{(i+1)/n}-B_{i/n})^2-\mathbb{E}((B_{i+1}-B_{i})^2)\big]$. We use the Malliavin calculus and the so-called Stein's ...
On the Impact of the Activation Function on Deep Neural Networks Training (Feb 19 2019; updated May 26 2019)
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and ...
On the Selection of Initialization and Activation Function for Deep Neural Networks (May 21 2018; updated Oct 07 2018)
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure. An inappropriate selection can lead to the loss of information of the input during forward propagation and ...
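The information-loss effect mentioned in the abstract can be made tangible with a toy forward pass (an illustrative sketch of my own, not the paper's analysis): propagating a random input through many tanh layers shows the signal variance collapsing when the weight scale is too small, while a larger scale keeps it at order one. The function name and parameter values are mine.

```python
import numpy as np

def final_variance(sigma_w, depth=50, width=500, seed=0):
    """Variance of the activations after `depth` tanh layers whose
    weights are drawn i.i.d. from N(0, sigma_w^2 / width)."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(width)          # random unit-variance input
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * (sigma_w / np.sqrt(width))
        h = np.tanh(W @ h)
    return float(h.var())

print(final_variance(0.5))  # variance collapses toward 0: the input signal is lost
print(final_variance(1.5))  # variance settles at an order-1 value
```

Intuitively, near zero each tanh layer multiplies the signal variance by roughly $\sigma_w^2$, so for $\sigma_w < 1$ the variance shrinks geometrically with depth.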
Training Dynamics of Deep Networks using Stochastic Gradient Descent via Neural Tangent Kernel (May 31 2019; updated Jun 07 2019)
Stochastic Gradient Descent (SGD) is widely used to train deep neural networks. However, few theoretical results on the training dynamics of SGD are available. Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with ...
On the abundances of noble and biologically relevant gases in Lake Vostok, Antarctica (Jan 10 2013)
Motivated by the possibility of comparing theoretical predictions of Lake Vostok's composition with future in situ measurements, we investigate the composition of clathrates that are expected to form in this environment from the air supplied to the lake ...
A regularization scheme for structured output problems: an application to facial landmark detection (Apr 28 2015; updated Nov 18 2016)
Facial landmark detection is an important step for many perception tasks. In this paper, we address facial landmark detection as a structured output regression problem, where we exploit the strong dependencies that lie between the facial landmarks. For ...
Deep Neural Networks Regularization for Structured Output Prediction (Apr 28 2015; updated Oct 30 2017)
A deep neural network model is a powerful framework for learning representations. Usually, it is used to learn the relation $x \to y$ by exploiting the regularities in the input $x$. In structured output prediction problems, $y$ is multi-dimensional and ...
Facial landmark detection using structured output deep neural networks (Apr 28 2015; updated Sep 24 2015)
Facial landmark detection is an important step for many perception tasks such as face recognition and facial analysis. Regression-based methods have shown a large success. In particular, deep neural networks (DNNs) have demonstrated a strong capability ...
Neural Networks Regularization Through Class-wise Invariant Representation Learning (Sep 06 2017; updated Dec 22 2017)
Training deep neural networks is known to require a large number of training samples. However, in many applications only a few training samples are available. In this work, we tackle the issue of training neural networks for a classification task when few ...
Weakly Supervised Localization using Min-Max Entropy: an Interpretable Framework (Jul 25 2019; updated Aug 31 2019)
Weakly supervised object localization (WSOL) models aim to locate objects of interest in an image after being trained only on data with coarse image-level labels. Deep learning models for WSOL typically rely on convolutional attention maps with no constraints ...
Deep weakly-supervised learning methods for classification and localization in histology images: a survey (Sep 08 2019)
Using state-of-the-art deep learning models for the computer-assisted diagnosis of diseases like cancer raises several challenges related to the nature and availability of labeled histology images. In particular, cancer grading and localization in these ...