Publications

CMAP theses are available at the following link:
Discover CMAP theses

Listed below, sorted by year, are the publications appearing in the HAL open archive.

2019

  • Regulation of renewable resource exploitation
    • Kharroubi Idris
    • Lim Thomas
    • Mastrolia Thibaut
    SIAM Journal on Control and Optimization, Society for Industrial and Applied Mathematics, 2019, 58 (1), pp.551-579. We investigate the impact of a regulation policy imposed on an agent exploiting a possibly renewable natural resource. We adopt a principal-agent model in which the Principal looks for a contract, i.e., taxes/compensations, leading the Agent to a certain level of exploitation. For a given contract, we first describe the Agent's optimal harvest using backward SDE theory. Under regularity and boundedness assumptions on the coefficients, we express almost optimal contracts as solutions to HJB equations. We then extend the result to coefficients with less regularity and logistic dynamics for the natural resource. We end with numerical examples illustrating the impact of the regulation in our model. (10.1137/19M1265740)
    DOI : 10.1137/19M1265740
  • A geometric based preprocessing for weighted ray transforms with applications in SPECT
    • Goncharov Fedor
    , 2019. In this work we numerically investigate the reconstruction approach proposed in Goncharov, Novikov, 2016, for weighted ray transforms (weighted Radon transforms along oriented straight lines) in 3D. In particular, the approach is based on a geometric reduction of the data modeled by weighted ray transforms to new data modeled by weighted Radon transforms along two-dimensional planes in 3D. Such a reduction can be seen as a preprocessing procedure that can be further completed by any preferred reconstruction algorithm. In a series of numerical tests on simulated and real SPECT (single photon emission computed tomography) data we demonstrate that this procedure can significantly reduce the impact of noise on reconstructions.
  • Mathematical modeling and simulation of non-equilibrium plasmas: application to magnetic reconnection in the Sun's atmosphere
    • Wargnier Quentin
    , 2019. The ability to model, simulate and predict magnetic reconnection (MR) is a key obstacle to forecasting space weather and geomagnetic storms, which can severely perturb satellites. Some fundamental aspects of MR are not yet well understood. The scientific issue at stake is the proper description of the unsteady energy transfer from magnetic energy to kinetic and thermal energy, which is still out of reach for standard magnetohydrodynamics (MHD) models. The first objective of the present project is to develop a coherent fluid model for magnetized plasmas out of thermal and chemical equilibrium, with a detailed description of the dissipative effects based on the kinetic theory of gases, which thus inherits a proper mathematical structure. The second goal is the development of a new numerical strategy, with high accuracy and robustness, based on a massively parallel code with adaptive mesh refinement able to cope with the full spectrum of scales of the model and the related stiffness. The whole set of transport coefficients, thermodynamic relations and chemical rates in this magnetized two-temperature setting will be studied and compared to those used in the literature. We will then show that the model and the related numerical strategy, obtained from this transdisciplinary work involving engineering, plasma physics, solar physics, mathematics, scientific computing and HPC, are able to properly reproduce the physics of MR. The validation of the approach through a series of test cases relevant to the dynamics of the solar atmosphere, in connection with VKI and NASA, will provide a tool, open to the community, capable of resolving several critical scientific and technological issues.
  • Numerical Simulation of a Battery Thermal Management System Under Uncertainty for a Racing Electric Car
    • Solai Elie
    • Beaugendre Heloise
    • Congedo Pietro Marco
    • Daccord Rémi
    • Guadagnini Maxime
    , 2019.
  • Scaling limit for solutions of BSDEs driven by population dynamics
    • Jusselin Paul
    • Mastrolia Thibaut
    , 2019. Starting from a scaling approach for birth/death processes, we investigate the convergence of solutions to BSDEs driven by a sequence of converging martingales. We apply our results to non-Markovian stochastic control problems for discrete population models. In particular we describe how the values and optimal controls of control problems converge when the models converge towards a continuous population model.
  • Statistical modeling of the gas-liquid interface using geometrical variables: toward a unified description of the disperse and separated phase flows
    • Essadki Mohamed
    • Drui Florence
    • de Chaisemartin Stéphane
    • Larat Adam
    • Ménard Thibault
    • Massot Marc
    International Journal of Multiphase Flow, Elsevier, 2019. In this work, we investigate an original strategy in order to derive a statistical modeling of the interface in gas-liquid two-phase flows through geometrical variables. The contribution is twofold. First it participates in the theoretical design of a unified reduced-order model for the description of two regimes: a disperse phase in a carrier fluid and two separated phases. The first idea is to propose a statistical description of the interface relying on geometrical properties such as the mean and Gauss curvatures and define a Surface Density Function (SDF). The second main idea consists in using such a formalism in the disperse case, where a clear link is proposed between local statistics of the interface and the statistics on objects, such as the number density function in the Williams-Boltzmann equation for droplets. This makes essential the use of topological invariants in geometry through the Gauss-Bonnet formula and allows us to include the works conducted on sprays of spherical droplets. It yields a statistical treatment of populations of non-spherical objects such as ligaments, as long as they are homeomorphic to a sphere. Second, it provides an original angle and algorithm in order to build statistics from DNS data of interfacial flows. From the theoretical approach, we identify a kernel for the spatial averaging of geometrical quantities preserving the topological invariants. Coupled with a new algorithm for the evaluation of curvatures and surface that preserves these invariants, we analyze two sets of DNS results conducted with the ARCHER code from CORIA, with and without topological changes, and assess the approach. (10.1016/j.ijmultiphaseflow.2019.103084)
    DOI : 10.1016/j.ijmultiphaseflow.2019.103084
  • High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm
    • Durmus Alain
    • Moulines Éric
    Bernoulli, Bernoulli Society for Mathematical Statistics and Probability, 2019, 25 (4A). We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization constant, $x \mapsto \pi(x)= \mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y$. Such a problem occurs naturally, for example, in Bayesian inference and machine learning. Under the assumption that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity in Wasserstein distance of order $2$ and total variation distance of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of these bounds on the dimension of the state space is explicit. The convergence of an appropriately weighted empirical measure is also investigated, and bounds for the mean square error and an exponential deviation inequality are reported for functions which are measurable and bounded. An illustration on Bayesian inference for binary regression is presented to support our claims. (10.3150/18-BEJ1073)
    DOI : 10.3150/18-BEJ1073
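The Euler discretization analysed in this paper is the Unadjusted Langevin Algorithm. As a rough illustration only, here is a minimal sketch for a strongly convex potential, using the standard Gaussian target $U(x) = x^2/2$ so that $\nabla U(x) = x$; the step size and iteration counts below are arbitrary choices, not values from the paper:

```python
import numpy as np

def ula(grad_U, x0, step, n_iter, rng):
    """Unadjusted Langevin Algorithm: Euler discretization of the Langevin SDE,
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I)."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter,) + x.shape)
    for k in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples

rng = np.random.default_rng(0)
# Target: standard 1D Gaussian, U(x) = x^2 / 2, so grad U(x) = x.
chain = ula(lambda x: x, x0=np.array([3.0]), step=0.05, n_iter=100_000, rng=rng)
est_var = chain[1000:].var()  # discard burn-in; should be close to 1
```

Note that with a constant step size the chain is biased (its stationary variance here is $1/(1-\gamma/2)$ rather than $1$), which is consistent with the non-asymptotic bounds of the paper being explicit in the step size.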
  • On time scales and quasi-stationary distributions for multitype birth-and-death processes
    • Chazottes J.-R.
    • Collet P.
    • Méléard S.
    Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, Institut Henri Poincaré (IHP), 2019. We consider a class of birth-and-death processes describing a population made of $d$ sub-populations of different types which interact with one another. The state space is $\mathbb{Z}^d_+$ (unbounded). We assume that the population goes almost surely to extinction, so that the unique stationary distribution is the Dirac measure at the origin. These processes are parametrized by a scaling parameter $K$ which can be thought of as the order of magnitude of the total size of the population at time $0$. For any fixed finite time span, it is well known that such processes, when renormalized by $K$, are close, in the limit $K\to+\infty$, to the solutions of a certain differential equation in $\mathbb{R}^d_+$ whose vector field is determined by the birth and death rates. We consider the case where there is a unique attractive fixed point (off the boundary of the positive orthant) for the vector field (while the origin is repulsive). What is expected is that, for $K$ large, the process will stay in the vicinity of the fixed point for a very long time before being absorbed at the origin. To precisely describe this behavior, we prove the existence of a quasi-stationary distribution (qsd, for short). In fact, we establish a bound for the total variation distance between the process conditioned to non-extinction before time $t$ and the qsd. This bound is exponentially small in $t$, for $t\gg \log K$. As a by-product, we obtain an estimate for the mean time to extinction in the qsd. We also quantify how close the law of the process (not conditioned to non-extinction) is either to the Dirac measure at the origin or to the qsd, for times much larger than $\log K$ and much smaller than the mean time to extinction, which is exponentially large as a function of $K$. Let us stress that we are interested in what happens for finite $K$. We obtain results much beyond what large deviation techniques could provide.
  • SpinDoctor: A MATLAB toolbox for diffusion MRI simulation
    • Li Jing-Rebecca
    • Nguyen Van-Dang
    • Tran Try Nguyen
    • Valdman Jan
    • Trang Cong-Bang
    • Nguyen Khieu Van
    • Vu Duc Thach Son
    • Tran Hoang An
    • Tran Hoang Trong An
    • Nguyen Thi Minh Phuong
    NeuroImage, Elsevier, 2019, 202, pp.116120. The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation. Under the assumption of negligible water exchange between compartments, the time-dependent apparent diffusion coefficient can be directly computed from the solution of a diffusion equation subject to a time-dependent Neumann boundary condition. This paper describes a publicly available MATLAB toolbox called SpinDoctor that can be used 1) to solve the Bloch-Torrey partial differential equation in order to simulate the diffusion magnetic resonance imaging signal; 2) to solve a diffusion partial differential equation to obtain directly the apparent diffusion coefficient; 3) to compare the simulated apparent diffusion coefficient with a short-time approximation formula. The partial differential equations are solved by P1 finite elements combined with built-in MATLAB routines for solving ordinary differential equations. The finite element mesh generation is performed using an external package called Tetgen. SpinDoctor provides built-in options for including 1) spherical cells with a nucleus; 2) cylindrical cells with a myelin layer; 3) an extra-cellular space enclosed either a) in a box or b) in a tight wrapping around the cells; 4) deformation of canonical cells by bending and twisting; 5) permeable membranes. Built-in diffusion-encoding pulse sequences include the Pulsed Gradient Spin Echo and the Oscillating Gradient Spin Echo. We describe in detail how to use the SpinDoctor toolbox. We validate SpinDoctor simulations using reference signals computed by the Matrix Formalism method. We compare the accuracy and computational time of SpinDoctor simulations with Monte-Carlo simulations and show significant speed-up of SpinDoctor over Monte-Carlo simulations in complex geometries. We also illustrate several extensions of SpinDoctor functionalities, including the incorporation of $T_2$ relaxation, the simulation of non-standard diffusion-encoding sequences, as well as the use of externally generated. (10.1016/j.neuroimage.2019.116120)
    DOI : 10.1016/j.neuroimage.2019.116120
  • On sub-Riemannian geodesic curvature in dimension three
    • Barilari Davide
    • Kohli Mathieu
    , 2019. We introduce a notion of geodesic curvature $k_{\zeta}$ for a smooth horizontal curve $\zeta$ in a three-dimensional contact sub-Riemannian manifold, measuring how far a horizontal curve is from being a geodesic. We show that the geodesic curvature appears as the first corrective term in the Taylor expansion of the sub-Riemannian distance between two points on a unit-speed horizontal curve $$ d_{SR}^2( \zeta(t),\zeta(t+\epsilon))=\epsilon^2-\frac{k_{\zeta}^2(t)}{720} \epsilon^6 +o(\epsilon^{6}). $$ The sub-Riemannian distance is not smooth on the diagonal, hence the result includes, in particular, the existence of such an asymptotic expansion. This can be seen as a higher-order differentiability property of the sub-Riemannian distance along smooth horizontal curves. It generalizes previously known results on the Heisenberg group.
  • On the Convergence Properties of the Mini-Batch EM and MCEM Algorithms
    • Karimi Belhal
    • Lavielle Marc
    • Moulines Éric
    , 2019. The EM algorithm is one of the most popular algorithms for inference in latent data models. For large datasets, each iteration of the algorithm can be numerically involved. To alleviate this problem, Neal and Hinton (1998) proposed an incremental version in which the conditional expectation of the latent data (E-step) is computed on a mini-batch of observations. In this paper, we analyse this variant and propose and analyse a Monte Carlo version of the incremental EM in which the conditional expectation is evaluated by Markov Chain Monte Carlo (MCMC). We establish the almost-sure convergence of these algorithms, covering both the mini-batch EM and its stochastic version. Various numerical applications are introduced in this article to illustrate our findings.
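The mini-batch E-step idea can be sketched on a toy model. The sketch below fits a balanced two-component Gaussian mixture with known unit variances, updating running sufficient statistics from a random mini-batch at each iteration; the model, step-size schedule and batch size are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: balanced two-component Gaussian mixture, unit variances.
n = 2000
z = rng.random(n) < 0.5
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

mu = np.array([-1.0, 1.0])   # initial component means
pi = np.array([0.5, 0.5])    # initial mixture weights
s0 = pi.copy()               # running statistic: mean responsibility per component
s1 = pi * mu                 # running statistic: mean (responsibility * x)

for t in range(500):
    batch = rng.choice(x, size=50, replace=False)
    # Mini-batch E-step: responsibilities under the current parameters.
    dens = pi * np.exp(-0.5 * (batch[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # Stochastic-approximation update of the sufficient statistics.
    rho = 1.0 / (t + 2) ** 0.6
    s0 = (1 - rho) * s0 + rho * resp.mean(axis=0)
    s1 = (1 - rho) * s1 + rho * (resp * batch[:, None]).mean(axis=0)
    # M-step: parameters from the current sufficient statistics.
    pi = s0 / s0.sum()
    mu = s1 / s0
```

The estimated means should settle near the true values (-2, 2). The MCEM variant analysed in the paper would replace the exact responsibilities above by an MCMC estimate.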
  • Practical Use Cases for Progressive Visual Analytics
    • Fekete Jean-Daniel
    • Chen Qing
    • Feng Yuheng
    • Renault Jonas
    , 2019. Progressive Visual Analytics (PVA) is meant to allow visual analytics applications to scale to large amounts of data while remaining interactive and steerable. The visualization community might believe that building progressive systems is difficult since there is no general-purpose toolkit yet to build PVA applications, but it turns out that many existing libraries and data structures can be used effectively to help build PVA applications. They are just not well known by the visual analytics community. We report here on some of the techniques and libraries we use to handle "larger than RAM" data efficiently in three applications: Cartolabe, a system for visualizing large document corpora, ParcoursVis, a system for visualizing large event sequences from the French social security, and PPCA, a progressive PCA visualization system for large amounts of time sequences. We explain how PVA can benefit from compressed bitsets to manage sets internally and perform extremely fast Boolean operations, data sketching to compute approximate results over streaming data, and online algorithms to perform analyses on large data.
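The bitset idea mentioned in this abstract can be sketched with plain Python integers used as uncompressed bitsets; a production PVA system would use a compressed format (e.g. Roaring bitmaps), so this is a deliberate simplification:

```python
def to_bitset(indices):
    """Pack a set of non-negative row indices into one Python int bitset."""
    b = 0
    for i in indices:
        b |= 1 << i
    return b

def bitset_to_indices(b):
    """Unpack a bitset back into the sorted list of set-bit positions."""
    out, pos = [], 0
    while b:
        if b & 1:
            out.append(pos)
        b >>= 1
        pos += 1
    return out

# Two hypothetical filters over a table of rows, each stored as a bitset.
filter_a = to_bitset({1, 3, 5, 7, 9})
filter_b = to_bitset({3, 4, 5, 6})

both = filter_a & filter_b      # rows matching both filters, one AND
either = filter_a | filter_b    # rows matching at least one filter
n_both = bin(both).count("1")   # population count of the intersection
```

Word-level Boolean operations are what make such set algebra fast enough for interactive filtering over millions of rows.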
  • MNE: Software for Acquiring, Processing, and Visualizing MEG/EEG Data
    • Esch Lorenz
    • Dinh Christoph
    • Larson Eric
    • Engemann Denis
    • Jas Mainak
    • Khan Sheraz
    • Gramfort Alexandre
    • Hämäläinen Matti
    , 2019, pp.355-371. The methods for acquiring, processing, and visualizing magnetoencephalography (MEG) and electroencephalography (EEG) data are rapidly evolving. Advancements in hardware and software development offer new opportunities for cognitive and clinical neuroscientists but at the same time introduce new challenges as well. In recent years the MEG/EEG community has developed a variety of software tools to overcome these challenges and cater to individual research needs. As part of this endeavor, the MNE software project, which includes MNE-C, MNE-Python, MNE-CPP, and MNE-MATLAB as its subprojects, offers an efficient set of tools addressing certain common needs. Even more importantly, the MNE software family covers diverse use case scenarios. Here, we present the landscape of the MNE project and discuss how it will evolve to address the current and emerging needs of the MEG/EEG community. (10.1007/978-3-030-00087-5_59)
    DOI : 10.1007/978-3-030-00087-5_59
  • Clustering of longitudinal shape data sets using mixture of separate or branching trajectories
    • Debavelaere Vianney
    • Bône Alexandre
    • Durrleman Stanley
    • Allassonnière Stéphanie
    , 2019, pp.66-74. Several methods have been proposed recently to learn spatiotemporal models of shape progression from repeated observations of several subjects over time, i.e. a longitudinal data set. These methods summarize the population by a single common trajectory in a supervised manner. In this paper, we propose to extend such approaches to an unsupervised setting where a longitudinal data set is automatically clustered in different classes without labels. Our method learns for each cluster an average shape trajectory (or representative curve) and its variance in space and time. Representative trajectories are built as the combination of pieces of curves. This mixture model is flexible enough to handle independent trajectories for each cluster as well as fork and merge scenarios. The estimation of such nonlinear mixture models in high dimension is known to be difficult because of the trapping-states effect that hampers the optimisation of cluster assignments during training. We address this issue by using a tempered version of the stochastic EM algorithm. Finally, we apply our algorithm on synthetic data to validate that a tempered scheme achieves better convergence. We then show how the method can be used to test different scenarios of hippocampus atrophy in ageing, using a heterogeneous population of normally ageing individuals and mildly cognitively impaired subjects.
  • Small-scale kinematics of two-phase flows: identifying relaxation processes in separated- and disperse-phase flow models
    • Drui Florence
    • Larat Adam
    • Kokh Samuel
    • Massot Marc
    Journal of Fluid Mechanics, Cambridge University Press (CUP), 2019, 876, pp.326-355. With the objective of modeling both separate and disperse two-phase flows, we use in this paper a methodology for deriving two-fluid models that do not assume any flow topology. This methodology is based on a variational principle and on an entropy dissipation requirement. Some of the models thus derived and studied are already known in the contexts of the description of separate- or disperse-phase flows. However, we here propose an arrangement of these models into a hierarchy based on their links through relaxation parameters. Moreover, the models are shown to be compatible with the description of a monodisperse bubbly flow and, within this frame, the relaxation parameters can be identified. This identification is finally verified and discussed through comparisons with experimental measurements of sound dispersion and with dispersion relations of a reference model for bubbly media. (10.1017/jfm.2019.538)
    DOI : 10.1017/jfm.2019.538
  • Adaptive large-eddy simulation of turbulent flows based on a discontinuous Galerkin method
    • Naddei Fabio
    , 2019. The main goal of this work is to improve the accuracy and computational efficiency of Large Eddy Simulations (LES) by means of discontinuous Galerkin (DG) methods. To this end, two main research topics have been investigated: resolution adaptation strategies and LES models for high-order methods. As regards the first topic, in the framework of DG methods the spatial resolution can be efficiently adapted by modifying either the local mesh size (h-adaptation) or the degree of the polynomial representation of the solution (p-adaptation). Automatic resolution adaptation requires the definition of an error estimation strategy to analyse the local solution quality and resolution requirements. The efficiency of several strategies derived from the literature is compared by performing p- and h-adaptive simulations. Based on this comparative study, a suitable error indicator for adaptive scale-resolving simulations is selected. Both static and dynamic p-adaptive algorithms for the simulation of unsteady flows are then developed and analysed. It is demonstrated by numerical simulations that the proposed algorithms can provide a reduction of the computational cost for the simulation of both transient and statistically steady flows. A novel error estimation strategy is then introduced. It is local, requiring only information from the element and its direct neighbours, and can be computed at run-time with limited overhead. It is shown that the static p-adaptive algorithm based on this error estimator can be employed to improve the accuracy of LES of statistically steady turbulent flows. As regards the second topic, a novel framework consistent with the DG discretization is developed for the a priori analysis of DG-LES models from DNS databases. It allows the identification of the ideal energy transfer mechanism between resolved and unresolved scales. This approach is applied to the analysis of the DG Variational Multiscale (VMS) approach. It is shown that, for fine resolutions, the DG-VMS approach is able to replicate the ideal energy transfer mechanism. However, for coarse resolutions, typical of LES at high Reynolds numbers, a more accurate agreement is obtained by a mixed Smagorinsky-VMS model.
  • Mullins-Sekerka as the Wasserstein flow of the perimeter
    • Chambolle Antonin
    • Laux Tim
    , 2019. We prove the convergence of an implicit time discretization for the Mullins-Sekerka equation proposed in [F. Otto, Arch. Rational Mech. Anal. 141 (1998) 63-103]. Our simple argument shows that the limit satisfies the equation in a distributional sense as well as an optimal energy-dissipation relation. The proof combines simple arguments from optimal transport, gradient flows & minimizing movements, and basic geometric measure theory.
  • Decision making strategy for antenatal echographic screening of foetal abnormalities using statistical learning
    • Besson Rémi
    , 2019. In this thesis, we propose a method to build a decision support tool for the diagnosis of rare diseases. We aim to minimize the number of medical tests necessary to achieve a state where the uncertainty regarding the patient's disease is less than a predetermined threshold. In doing so, we take into account the need, in many medical applications, to avoid misdiagnosis as much as possible. To solve this optimization task, we investigate several reinforcement learning algorithms and make them operable in our high-dimensional setting. To do this, we break down the initial problem into several sub-problems and show that it is possible to take advantage of the intersections between these sub-tasks to accelerate the learning phase. The strategies learned are much more effective than classic greedy strategies. We also present a way to combine expert knowledge, expressed as conditional probabilities, with clinical data. This is crucial because the scarcity of data in the field of rare diseases prevents any approach based solely on clinical data. We show, both empirically and theoretically, that our proposed estimator is always more efficient than the best of the two models (expert or data) up to a constant. Finally, we show that it is possible to effectively integrate reasoning that takes into account the level of granularity of the reported symptoms while remaining within the probabilistic framework developed throughout this work.
  • Combining Ensemble Kalman Filter and Multiresolution Analysis for Efficient Assimilation into Adaptive Mesh Models
    • Siripatana A
    • Giraldi L
    • Le Maitre Olivier
    • Hoteit I.
    • Knio O M
    Computational Geosciences, Springer Verlag, 2019, 23 (6), pp.1259-1276. A new approach is developed for data assimilation into adaptive mesh models with the ensemble Kalman filter (EnKF). The EnKF is combined with a wavelet-based multiresolution analysis (MRA) scheme to enable robust and efficient assimilation in the context of reduced-complexity adaptive spatial discretization. The wavelet representation of the solution enables the use of different meshes that are individually adapted to the corresponding members of the EnKF ensemble. The analysis step of the EnKF is then performed by involving coarsening, refinement, and projection operations on its ensemble members. Depending on the choice of these operations, five variants of the MRA-EnKF are introduced and tested on the one-dimensional Burgers equation with periodic boundary conditions. The numerical results suggest that, given an appropriate tolerance value for the coarsening operation, four out of the five proposed schemes significantly reduce the computational complexity of the assimilation system, with marginal accuracy loss compared to the reference full-resolution EnKF solution. Overall, the proposed framework offers the possibility of capitalizing on the advantages of adaptive mesh techniques, and the flexibility of choosing suitable context-oriented criteria for efficient data assimilation. (10.1007/s10596-019-09882-z)
    DOI : 10.1007/s10596-019-09882-z
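For readers unfamiliar with the EnKF building block, a minimal sketch of the stochastic (perturbed-observations) analysis step on a fixed mesh follows; the wavelet/MRA machinery of the paper is omitted, and the dimensions and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    Xf: forecast ensemble, shape (n_state, n_members)."""
    n, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)                     # analysis ensemble

# Toy setup: 2-state system, 200 members, observe only the first component.
N = 200
Xf = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=N).T
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
y = np.array([1.0])

Xa = enkf_analysis(Xf, y, H, R, rng)
var_f = Xf[0].var()
var_a = Xa[0].var()  # spread at the observed component shrinks
```

In the MRA variants of the paper, coarsening/refinement/projection operators would be applied to the members before and after this analysis step so that each member can live on its own adapted mesh.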
  • Polynomial Chaos Level Points Method for One-Dimensional Uncertain Steep Problems
    • Sochala Pierre
    • Le Maitre Olivier
    Journal of Scientific Computing, Springer Verlag, 2019, 81 (3), pp.1987-2009. We propose an alternative approach to the direct polynomial chaos expansion in order to approximate a one-dimensional uncertain field exhibiting steep fronts. The principle of our non-intrusive approach is to decompose the level points of the quantity of interest in order to avoid the spurious oscillations encountered in the direct approach. This method is more accurate and less expensive than the direct approach, since the regularity of the level points with respect to the input parameters allows achieving convergence with low-order polynomial series. The additional computational cost induced in the post-processing phase is largely offset by the use of low-level sparse grids that require a small number of direct model evaluations in comparison with high-level sparse grids. We apply the method to a subsurface flow problem with uncertain hydraulic conductivity. Infiltration test cases having different levels of complexity are presented. (10.1007/s10915-019-01069-z)
    DOI : 10.1007/s10915-019-01069-z
  • A robust expectation-maximization method for the interpretation of small-angle scattering data from dense nanoparticle samples
    • Bakry Marc
    • Bunău Oana
    • Haddar Houssem
    Journal of Applied Crystallography, International Union of Crystallography / Wiley, 2019, 52 (5), pp.926-936. The Local Monodisperse Approximation (LMA) is a two-parameter model commonly employed for the retrieval of size distributions from the small-angle scattering (SAS) patterns obtained on dense nanoparticle samples (e.g. dry powders and concentrated solutions). This work features a novel implementation of the LMA model resolution for the inverse scattering problem. Our method is based on the Expectation-Maximization iterative algorithm and is free from any fine tuning of model parameters. The application of our method to SAS data acquired in laboratory conditions on dense nanoparticle samples is shown to provide good results. (10.1107/S1600576719009373)
    DOI : 10.1107/S1600576719009373
  • On the time discretization of stochastic optimal control problems: the dynamic programming approach
    • Bonnans Joseph Frédéric
    • Gianatti Justina
    • Silva Francisco
    ESAIM: Control, Optimisation and Calculus of Variations, EDP Sciences, 2019. In this work we consider the time discretization of stochastic optimal control problems. Under general assumptions on the data, we prove the convergence of the value functions associated with the discrete-time problems to the value function of the original problem. Moreover, we prove that any sequence of optimal solutions of the discrete problems is minimizing for the continuous one. As a consequence of the Dynamic Programming Principle for the discrete problems, the minimizing sequence can be taken in discrete-time feedback form. (10.1051/cocv/2018045)
    DOI : 10.1051/cocv/2018045
  • Sequential model aggregation for production forecasting
    • Deswarte Raphaël
    • Gervais Véronique
    • Stoltz Gilles
    • da Veiga Sébastien
    Computational Geosciences, Springer Verlag, 2019, 23 (5), pp.1107-1124. Production forecasting is a key step to design the future development of a reservoir. A classical way to generate such forecasts consists in simulating future production for numerical models representative of the reservoir. However, identifying such models can be very challenging as they need to be constrained to all available data. In particular, they should reproduce past production data, which requires solving a complex non-linear inverse problem. In this paper, we thus propose to investigate the potential of machine learning algorithms to predict the future production of a reservoir based on past production data, without model calibration. We focus more specifically on robust online aggregation, a deterministic approach that provides a robust framework to make forecasts on a regular basis. This method does not rely on any specific assumption or need for stochastic modeling. Forecasts are first simulated for a set of base reservoir models representing the prior uncertainty, and then combined to predict production at the next time step. The weight associated to each forecast is related to its past performance. Three different algorithms are considered for the weight computations: the exponentially weighted average algorithm, ridge regression, and Lasso regression. They are applied on a synthetic reservoir case study, the Brugge case, for sequential predictions. To estimate the potential of development scenarios, production forecasts are needed on long periods of time without intermediary data acquisition. An extension of the deterministic aggregation approach is thus proposed in this paper to provide such multi-step-ahead forecasts. (10.1007/s10596-019-09872-1)
    DOI : 10.1007/s10596-019-09872-1
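The exponentially weighted average algorithm named in this abstract can be sketched in a few lines. The constant "expert" forecasts, the square loss and the learning rate below are illustrative choices, not the paper's reservoir setting:

```python
import numpy as np

def ewa_forecast(expert_preds, targets, eta):
    """Exponentially weighted average aggregation.
    expert_preds: array (T, K), forecasts of K experts at each of T steps;
    targets: array (T,), values revealed after each forecast."""
    T, K = expert_preds.shape
    cum_loss = np.zeros(K)
    agg_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # stabilized weights
        w /= w.sum()
        pred = w @ expert_preds[t]                      # aggregated forecast
        agg_loss += (pred - targets[t]) ** 2
        cum_loss += (expert_preds[t] - targets[t]) ** 2 # update expert losses
    return agg_loss, cum_loss

# Three "base models" issuing constant production forecasts.
T = 100
experts = np.tile([0.0, 1.0, 2.0], (T, 1))
truth = np.full(T, 0.8)
agg_loss, expert_losses = ewa_forecast(experts, truth, eta=1.0)
# The aggregated forecaster quickly concentrates its weight on the best expert.
```

The weight of each expert decays exponentially in its cumulative past loss, which is exactly the "weight related to past performance" mechanism the paper applies to ensembles of reservoir-model forecasts.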
  • Monotone and second order consistent scheme for the two dimensional Pucci equation
    • Bonnans Joseph Frédéric
    • Bonnet Guillaume
    • Mirebeau Jean-Marie
    , 2020, 139, pp.733-742. We introduce a new strategy for the design of second-order accurate discretizations of non-linear second-order operators of Bellman type, which preserves degenerate ellipticity. The approach relies on Selling's formula, a tool from lattice geometry, and is applied to the Pucci equation, discretized on a two-dimensional Cartesian grid. Numerical experiments illustrate the robustness and the accuracy of the method. Note: an earlier version of this document also discussed the Monge-Ampère PDE. This part has been removed in the revised version in order to comply with the page limit of the journal. (10.1007/978-3-030-55874-1_72)
    DOI : 10.1007/978-3-030-55874-1_72
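Selling's formula, on which the scheme relies, decomposes a 2D positive definite matrix over an obtuse superbase of the integer lattice. A brute-force sketch (the search range and the test matrix are illustrative; the paper builds its monotone stencils from such decompositions, not necessarily by exhaustive search):

```python
import itertools
import numpy as np

def obtuse_superbase_2d(D, search=2):
    """Find an integer superbase (b0, b1, b2), with b0 + b1 + b2 = 0, that is
    D-obtuse: <b_i, D b_j> <= 0 for all pairs i < j (brute-force search)."""
    rng_ = range(-search, search + 1)
    for b1 in itertools.product(rng_, repeat=2):
        for b2 in itertools.product(rng_, repeat=2):
            b1a, b2a = np.array(b1), np.array(b2)
            if abs(b1a[0] * b2a[1] - b1a[1] * b2a[0]) != 1:
                continue  # (b1, b2) must be a basis of Z^2
            b0a = -b1a - b2a
            base = [b0a, b1a, b2a]
            if all(base[i] @ D @ base[j] <= 1e-12
                   for i in range(3) for j in range(i + 1, 3)):
                return base
    raise ValueError("no obtuse superbase found in search range")

def selling_decomposition_2d(D):
    """Selling's formula in 2D: D = sum over pairs {i,j} of
    -<b_i, D b_j> e_k e_k^T, where e_k is b_k rotated by 90 degrees."""
    b = obtuse_superbase_2d(D)
    perp = lambda v: np.array([-v[1], v[0]])
    terms = []
    for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
        rho = -(b[i] @ D @ b[j])   # nonnegative because the superbase is obtuse
        terms.append((rho, perp(b[k])))
    return terms

D = np.array([[2.0, 1.0], [1.0, 3.0]])
terms = selling_decomposition_2d(D)
recon = sum(rho * np.outer(e, e) for rho, e in terms)  # recovers D exactly
```

The nonnegative coefficients `rho` and the integer offsets `e` are what make a degenerate-elliptic (monotone) finite-difference stencil possible.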
  • On the notion of geodesic curvature in sub-Riemannian geometry
    • Kohli Mathieu
    , 2019. We present a notion of geodesic curvature for smooth horizontal curves in a contact sub-Riemannian manifold, measuring how far a horizontal curve is from being a geodesic. This geodesic curvature consists in two functions that both vanish along a smooth horizontal curve if and only if this curve is a geodesic. The main result of this thesis is the metric interpretation of these geodesic curvature functions. This interpretation consists in seeing the geodesic curvature functions as the first corrective coefficients in the Taylor expansion of the sub-Riemannian distance between two close points on the curve.