
Publications


Theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications deposited in the HAL open archive.

2020

  • Modulation of homogeneous and isotropic turbulence by sub-Kolmogorov particles: Impact of particle field heterogeneity
    • Letournel Roxane
    • Laurent Frédérique
    • Massot Marc
    • Vié Aymeric
    International Journal of Multiphase Flow, Elsevier, 2020, 125, pp.103233. The modulation of turbulence by sub-Kolmogorov particles has been thoroughly characterized in the literature, showing either enhancement or reduction of kinetic energy at small or large scale depending on the Stokes number and the mass loading. However, the impact of a third parameter, the number density of particles, has not been independently investigated. In the present work, we perform direct numerical simulations of decaying Homogeneous Isotropic Turbulence loaded with monodisperse sub-Kolmogorov particles, varying independently the Stokes number, the mass loading and the number density of particles. Like previous investigators, crossover and modulations of the fluid energy spectra are observed consistently with the change in Stokes number and mass loading. Additionally, DNS results show a clear impact of the particle number density, promoting the energy at small scales while reducing the energy at large scales. For high particle number density, the turbulence statistics and spectra become insensitive to the increase of this parameter, presenting a two-way asymptotic behavior. Our investigation identifies the energy transfer mechanisms, and highlights the differences between the influence of a highly concentrated disperse phase (high particle number density, limit behavior) and that of heterogeneous concentration fields (low particle number density). In particular, a measure of this heterogeneity is proposed and discussed, which allows us to identify specific regimes in the evolution of turbulence statistics and spectra. (10.1016/j.ijmultiphaseflow.2020.103233)
    DOI : 10.1016/j.ijmultiphaseflow.2020.103233
  • Deep Generative Models: High-Dimensional Sampling Revisited
    • Moulines Eric
    , 2020. Generative models (GMs) allow inferring distribution models for high-dimensional structured observations, which are typical of modern AI. Generative models can also be used to sample new examples, linking the inference problem to sampling. Learning deep generative models (DGMs) able to capture the complex dependence structures of distributions from large datasets in an unsupervised or semi-supervised setting appears today as one of the main challenges of AI. Deep generative models have many exciting applications: addressing data scarcity by generating "new" examples, preserving privacy by releasing the generative model in place of the data, and also detecting outlying observations. In this presentation, I will cover three research directions I am currently working on. A first approach is based on minimizing the cross-entropy (Kullback-Leibler divergence) between the distribution of the observations and a model parameterized either by deep neural networks or by better-suited energy functions, connecting generative models with the "energy-based models" that were introduced for unsupervised learning (but in a non-probabilistic setting). This approach is appealing but raises difficult computational problems related to the need to estimate the normalizing constant and its gradient. A second approach relies on maximum entropy methods. This approach originates in statistical physics: it learns a distribution maximizing entropy under moment constraints, where the moments are built from a representation produced by a deep neural network. A third approach consists in using variational autoencoders (VAEs), a particular case of variational inference. VAEs jointly learn an algorithm for generating samples from the distribution and a latent space that summarizes the distribution of the observations. I will illustrate these approaches with examples and discuss the theoretical and numerical challenges they raise. <a href="https://videos-rennes.inria.fr/video/HJt6vEaXI" target="_blank">[Online video]</a>
  • Comparative study of harmonic and Rayleigh-Ritz procedures with applications to deflated conjugate gradients
    • Venkovic Nicolas
    • Mycek Paul
    • Giraud Luc
    • Le Maitre Olivier
    , 2020. Harmonic Rayleigh-Ritz and Rayleigh-Ritz projection techniques are compared in the context of iterative procedures to solve for small numbers of least dominant eigenvectors of large symmetric positive definite matrices. The procedures considered are (i) locally optimal conjugate gradient (CG) methods, i.e., LOBCG, (ii) thick-restart Lanczos methods, and (iii) recycled linear CG solvers, e.g., eigCG. Approaches based on principles of local optimality are adapted to enable the use of harmonic projection techniques. Upon investigating the search spaces generated by these methods, it is found that LOBCG and thick-restart Lanczos methods can be adapted, which is not the case of eigCG. Explanations are also given as to why eigCG works so well in comparison to other recycling strategies. Numerical experiments show that, while approaches based on harmonic projections consistently result in a faster convergence of eigen-residuals, they generally do not yield better convergence of the forward error of eigenvectors, until the Rayleigh quotients have converged. Then, the effect of recycling strategies is investigated on deflation for the resolution of sequences of linear systems. While non-locally optimal recycling strategies need to solve more linear systems in order to fully develop their effect on convergence, they eventually reach similar behaviors to those of locally optimal recycling procedures. While implementations based on Init-CG are robust for systems with multiple right-hand sides, this is not the case for multiple operators.
  • Quantitative modeling links in vivo microstructural and macrofunctional organization of human and macaque insular cortex, and predicts cognitive control abilities
    • Menon Vinod
    • Gallardo Guillermo
    • Pinsk Mark
    • Nguyen Van-Dang
    • Li Jing-Rebecca
    • Cai Weidong
    • Wassermann Demian
    , 2020. (10.1101/662601)
    DOI : 10.1101/662601
  • Computing bi-tangents for transmission belts
    • Chouly Franz
    • Loubani Jinan
    • Lozinski Alexei
    • Méjri Bochra
    • Merito Kamil
    • Passos Sébastien
    • Pineda Angie
    , 2020. In this note, we determine the bi-tangents of two rotated ellipses, and we compute the coordinates of their points of tangency. For these purposes, we develop two approaches. The first one is an analytical approach in which we compute analytically the equations of the bi-tangents. This approach is valid only for some cases. The second one is geometrical and is based on the determination of the normal vector to the tangent line. This approach turns out to be more robust than the first one and is valid for any configuration of ellipses.
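The note's tangency condition can be illustrated numerically: a line y = m·x + t is tangent to a conic exactly when substituting the line leaves a quadratic in x with zero discriminant, and a bi-tangent of two conics zeroes both discriminants at once. The sketch below (not the authors' implementation; the Newton iteration, the conic encoding and all names are assumptions chosen for illustration) solves that 2x2 nonlinear system:

```python
# A conic is encoded as (a, b, c, d, e, f) for a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
# Illustrative sketch only: a bi-tangent y = m*x + t makes both tangency residuals vanish.

def tangency_residual(conic, m, t):
    """Discriminant of the quadratic in x obtained by substituting y = m*x + t."""
    a, b, c, d, e, f = conic
    A = a + b*m + c*m*m          # coefficient of x^2
    B = b*t + 2*c*m*t + d + e*m  # coefficient of x
    C = c*t*t + e*t + f          # constant term
    return B*B - 4*A*C

def bitangent(conic1, conic2, m0, t0, steps=50, h=1e-7):
    """2D Newton iteration with a forward-difference Jacobian, from a guess (m0, t0)."""
    m, t = m0, t0
    for _ in range(steps):
        F1 = tangency_residual(conic1, m, t)
        F2 = tangency_residual(conic2, m, t)
        J11 = (tangency_residual(conic1, m + h, t) - F1)/h
        J12 = (tangency_residual(conic1, m, t + h) - F1)/h
        J21 = (tangency_residual(conic2, m + h, t) - F2)/h
        J22 = (tangency_residual(conic2, m, t + h) - F2)/h
        det = J11*J22 - J12*J21
        if abs(det) < 1e-14:
            break
        m -= ( J22*F1 - J12*F2)/det
        t -= (-J21*F1 + J11*F2)/det
    return m, t

# Two unit circles (special ellipses) centred at (0,0) and (5,0); the upper
# external bi-tangent is the line y = 1, i.e. m = 0, t = 1.
c1 = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
c2 = (1.0, 0.0, 1.0, -10.0, 0.0, 24.0)
m, t = bitangent(c1, c2, 0.1, 1.2)
```

As in the note, an approach of this kind fails for vertical bi-tangents (no finite slope m), which is one reason a more geometric parametrization can be more robust.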
  • Adaptive Bayesian SLOPE—High-dimensional Model Selection with Missing Values
    • Jiang Wei
    • Bogdan Malgorzata
    • Josse Julie
    • Miasojedow Blazej
    • Rockova Veronika
    , 2020. We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates. To address this relatively understudied problem, we propose a new synergistic procedure -- adaptive Bayesian SLOPE -- which effectively combines the SLOPE method (sorted l1 regularization) together with the Spike-and-Slab LASSO method. We position our approach within a Bayesian framework which allows for simultaneous variable selection and parameter estimation, despite the missing values. As with the Spike-and-Slab LASSO, the coefficients are regarded as arising from a hierarchical model consisting of two groups: (1) the spike for the inactive and (2) the slab for the active. However, instead of assigning independent spike priors for each covariate, here we deploy a joint "SLOPE" spike prior which takes into account the ordering of coefficient magnitudes in order to control for false discoveries. Through extensive simulations, we demonstrate satisfactory performance in terms of power, FDR and estimation bias under a wide range of scenarios. Finally, we analyze a real dataset consisting of patients from Paris hospitals who underwent a severe trauma, where we show excellent performance in predicting platelet levels. Our methodology has been implemented in C++ and wrapped into an R package ABSLOPE for public use.
  • Nonparametric imputation by data depth
    • Mozharovskyi Pavlo
    • Josse Julie
    • Husson François
    Journal of the American Statistical Association, Taylor & Francis, 2020, 115 (529), pp.241-253. The presented methodology for single imputation of missing values borrows the idea from data depth --- a measure of centrality defined for an arbitrary point of the space with respect to a probability distribution or a data cloud. It consists in iterative maximization of the depth of each observation with missing values, and can be employed with any properly defined statistical depth function. On each single iteration, imputation is narrowed down to optimization of a quadratic, linear, or quasiconcave function, solved analytically, by linear programming, or by the Nelder-Mead method, respectively. Being able to grasp the underlying data topology, the procedure is distribution free, allows imputing close to the data, preserves prediction possibilities, unlike local imputation methods (k-nearest neighbors, random forest), and has attractive robustness and asymptotic properties under elliptical symmetry. It is shown that its particular case --- when using Mahalanobis depth --- has a direct connection to well-known treatments for the multivariate normal model, such as iterated regression or regularized PCA. The methodology is extended to multiple imputation for data stemming from an elliptically symmetric distribution. Simulation and real data studies positively contrast the procedure with existing popular alternatives. The method has been implemented as an R-package. (10.1080/01621459.2018.1543123)
    DOI : 10.1080/01621459.2018.1543123
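The abstract's remark that the Mahalanobis-depth case reduces to iterated regression can be sketched in the simplest bivariate setting: repeatedly refit the regression of y on x from the current completed data and replace each missing y by its conditional mean. This is a toy illustration under that assumption, not the paper's R package; all names are hypothetical:

```python
def iterated_regression_impute(x, y, n_iter=200):
    """Impute entries of y marked None by iterating the regression of y on x
    computed on the current completed data (the conditional-mean update to
    which Mahalanobis-depth imputation reduces in this bivariate toy case)."""
    miss = [i for i, v in enumerate(y) if v is None]
    obs = [v for v in y if v is not None]
    y = [sum(obs)/len(obs) if v is None else v for v in y]  # start at observed mean
    n = len(x)
    for _ in range(n_iter):
        mx, my = sum(x)/n, sum(y)/n
        cov = sum((x[i] - mx)*(y[i] - my) for i in range(n))/n
        var = sum((xi - mx)**2 for xi in x)/n
        slope = cov/var
        for i in miss:
            y[i] = my + slope*(x[i] - mx)  # conditional-mean update
    return y

# Exact line y = 2x with one value masked: the fixed point of the iteration
# recovers the regression prediction.
y = iterated_regression_impute([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, None])
```

The fixed point of this loop is exactly the regression imputation fitted on the completed data, which is the "iterated regression" treatment the abstract refers to.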
  • Maximization of the Steklov Eigenvalues with a Diameter Constraint
    • Al Sayed Abdelkader
    • Bogosel Beniamin
    • Henrot Antoine
    • Nacry Florent
    SIAM Journal on Mathematical Analysis, Society for Industrial and Applied Mathematics, 2020, 53 (1), pp.710-729. In this paper, we address the problem of maximizing the Steklov eigenvalues with a diameter constraint. We provide an estimate of the Steklov eigenvalues for a convex domain in terms of its diameter and volume and we show the existence of an optimal convex domain. We establish that balls are never maximizers, even for the first non-trivial eigenvalue, which contrasts with the case of volume or perimeter constraints. Under an additional regularity assumption, we are able to prove that the Steklov eigenvalue is multiple for the optimal domain. We illustrate our theoretical results by giving some optimal domains in the plane thanks to a numerical algorithm. (10.1137/20M1335042)
    DOI : 10.1137/20M1335042
  • Second order linear differential equations with analytic uncertainties: stochastic analysis via the computation of the probability density function
    • Jornet Marc
    • Calatayud Julia
    • Le Maitre Olivier
    • Cortés Juan Carlos
    Journal of Computational and Applied Mathematics, Elsevier, 2020, 374, pp.112770. This paper concerns the analysis of random second order linear differential equations. Usually, solving these equations consists of computing the first statistics of the response process, and that task has been an essential goal in the literature. A more ambitious objective is the computation of the solution probability density function. We present advances on these two aspects in the case of general random non-autonomous second order linear differential equations with analytic data processes. The Frobenius method is employed to obtain the stochastic solution in the form of a mean square convergent power series. We demonstrate that the convergence requires the boundedness of the random input coefficients. Further, the mean square error of the Frobenius method is proved to decrease exponentially with the number of terms in the series, although not uniformly in time. Regarding the probability density function of the solution at a given time, which is the focus of the paper, we rely on the law of total probability to express it in closed-form as an expectation. For the computation of this expectation, a sequence of approximating density functions is constructed by reducing the dimensionality of the problem using the truncated power series of the fundamental set. We prove several theoretical results regarding the pointwise convergence of the sequence of density functions and the convergence in total variation. The pointwise convergence turns out to be exponential under a Lipschitz hypothesis. As the density functions are expressed in terms of expectations, we propose a crude Monte Carlo sampling algorithm for their estimation. This algorithm is implemented and applied on several numerical examples designed to illustrate the theoretical findings of the paper. After that, the efficiency of the algorithm is improved by employing the control variates method. 
Numerical examples corroborate the variance reduction of the Monte Carlo approach. (10.1016/j.cam.2020.112770)
    DOI : 10.1016/j.cam.2020.112770
  • Optimality conditions in variational form for non-linear constrained stochastic control problems
    • Pfeiffer Laurent
    Mathematical Control and Related Fields, AIMS, 2020, 10 (3), pp.493-526. Optimality conditions in the form of a variational inequality are proved for a class of constrained optimal control problems of stochastic differential equations. The cost function and the inequality constraints are functions of the probability distribution of the state variable at the final time. The analysis uses in an essential manner a convexity property of the set of reachable probability distributions. An augmented Lagrangian method based on the obtained optimality conditions is proposed and analyzed for solving iteratively the problem. At each iteration of the method, a standard stochastic optimal control problem is solved by dynamic programming. Two academical examples are investigated. (10.3934/mcrf.2020008)
    DOI : 10.3934/mcrf.2020008
  • High to Low pellet cladding gap heat transfer modeling methodology in an uncertainty quantification framework for a PWR Rod Ejection Accident with best estimate coupling
    • Delipei Gregory Kyriakos
    • Garnier Josselin
    • Le Pallec Jean-Charles
    • Normand Benoit
    EPJ N - Nuclear Sciences & Technologies, EDP Sciences, 2020, 6, pp.56. High to Low modeling approaches can alleviate the computationally expensive fuel modeling in nuclear reactor transient uncertainty quantification. This is especially the case for the Rod Ejection Accident (REA) in Pressurized Water Reactors (PWR), where strong multi-physics interactions occur. In this work, we develop and propose a pellet cladding gap heat transfer (Hgap) High to Low modeling methodology for a PWR REA in an uncertainty quantification framework. The methodology involves the calibration of a simplified Hgap model based on high fidelity simulations with the fuel-thermomechanics code ALCYONE1. The calibrated model is then introduced into the CEA-developed CORPUS Best Estimate (BE) multi-physics coupling between APOLLO3® and FLICA4. This creates an Improved Best Estimate (IBE) coupling that is then used for an uncertainty quantification study. The results indicate that with IBE the distance to boiling crisis uncertainty is decreased from 57% to 42%. This is reflected in the decreased sensitivity to Hgap: in the BE coupling, Hgap was responsible for 50% of the output variance, while in IBE it is close to 0. These results show the potential gain of High to Low approaches for Hgap modeling in REA uncertainty analyses. (10.1051/epjn/2020018)
    DOI : 10.1051/epjn/2020018
  • SPIX: a new software package to reveal chemical reactions at trace amounts in very complex mixtures from high-resolution mass spectra data sets
    • Nicol Edith
    • Xu Yao
    • Varga Zsuzsanna
    • Kinani Said
    • Bouchonnet Stéphane
    • Lavielle Marc
    Rapid Communications in Mass Spectrometry, Wiley, 2020. Rationale: High-resolution mass spectrometry-based non-targeted screening has a huge potential for applications in environmental sciences, engineering and regulation. However, it produces big data for which full appropriate processing is a real challenge; the development of processing software is the last building-block to enable large-scale use of this approach. Methods: A new software application, SPIX, has been developed to extract relevant information from high-resolution mass-spectrum datasets. Dealing with intrinsic sample variability and reducing operator subjectivity, it opens up opportunities and promising prospects in many areas of analytical chemistry. SPIX is freely available at: http://spix.webpopix.org. Results: Two features of the software are presented in the field of environmental analysis. An example illustrates how SPIX reveals photodegradation reactions in wastewater by fitting kinetic models to significant changes in ion abundance over time. A second example shows the ability of SPIX to detect photoproducts at trace amounts in river water, through comparison of datasets from samples taken before and after irradiation. Conclusions: SPIX has shown its ability to reveal relevant modifications between two series of big data sets, allowing for instance the study of the consequences of a given event on a complex substrate. Most of all, and this is to our knowledge the only software currently available allowing it, SPIX can reveal and monitor any kind of reaction in all types of mixture.
  • Validation strategy of reduced-order two-fluid flow models based on a hierarchy of direct numerical simulations
    • Cordesse Pierre
    • Remigi Alberto
    • Duret Benjamin
    • Murrone Angelo
    • Ménard Thibaut
    • Demoulin François-Xavier
    • Massot Marc
    Flow, Turbulence and Combustion, Springer Verlag, 2020, 105 (4), pp.1381-1411. Whereas direct numerical simulations (DNS) have reached a high level of description in the field of atomization processes, they are not yet able to cope with industrial needs since they lack resolution and are too costly. Predictive simulations relying on reduced order modeling have become mandatory for applications ranging from cryotechnic to aeronautic combustion chamber liquid injection. Two-fluid models provide a good basis on which to conduct such simulations, even if recent advances allow refining subscale modeling using geometrical variables in order to reach a unified model including separate phases and disperse phase descriptions based on high order moment methods. The simulation of such models has to rely on dedicated numerical methods and still lacks assessment of its predictive capabilities. The present paper constitutes a building block of the investigation of a hierarchy of test-cases designed to be amenable to DNS while close enough to industrial configurations, for which we propose a comparison of two-fluid compressible simulations with DNS databases. We focus in the present contribution on an air-assisted water atomization using a planar liquid sheet injector. Qualitative and quantitative comparisons with incompressible DNS allow us to identify and analyze strengths and weaknesses of the reduced-order modeling and numerical approach in this specific configuration and set a framework for more refined models, since they already provide a very interesting level of comparison on averaged quantities. (10.1007/s10494-020-00154-w)
    DOI : 10.1007/s10494-020-00154-w
  • Optimal Hedging Under Fast-Varying Stochastic Volatility
    • Garnier Josselin
    • Sølna Knut
    SIAM Journal on Financial Mathematics, Society for Industrial and Applied Mathematics, 2020, 11 (1), pp.274-325. In a market with a rough or Markovian mean-reverting stochastic volatility there is no perfect hedge. Here it is shown how various delta-type hedging strategies perform and can be evaluated in such markets in the case of European options. A precise characterization of the hedging cost, the replication cost caused by the volatility fluctuations, is presented in an asymptotic regime of rapid mean reversion for the volatility fluctuations. The optimal dynamic asset-based hedging strategy in the considered regime is identified as the so-called “practitioners” delta hedging scheme. It is moreover shown that the performances of the delta-type hedging schemes are essentially independent of the regularity of the volatility paths in the considered regime and that the hedging costs are related to a Vega risk martingale whose magnitude is proportional to a new market risk parameter. It is also shown via numerical simulations that the proposed hedging schemes, which derive from option price approximations in the regime of rapid mean reversion, are robust: the “practitioners” delta hedging scheme that is identified as being optimal by our asymptotic analysis when the mean reversion time is small seems to be optimal with arbitrary mean reversion times. (10.1137/18M1221655)
    DOI : 10.1137/18M1221655
  • Regression Monte Carlo methods for HJB-type equations: which approximation space?
    • Barrera David
    • Gobet Emmanuel
    • Lopez-Salas Jose
    • Turkedjiev Plamen
    • Vasquez Carlos
    • Zubelli Jorge
    , 2020.
  • Tropical planar networks
    • Gaubert Stéphane
    • Niv Adi
    Linear Algebra and its Applications, Elsevier, 2020, 595, pp.123-144. We show that every tropical totally positive matrix can be uniquely represented as the transfer matrix of a canonical totally connected weighted planar network. We deduce a uniqueness theorem for the factorization of a tropical totally positive matrix in terms of elementary Jacobi matrices. (10.1016/j.laa.2020.02.019)
    DOI : 10.1016/j.laa.2020.02.019
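In the tropical setting referred to above, matrix products replace sums by max (or min) and products by addition, so the transfer matrix of a weighted network records heaviest path weights. A small illustrative sketch (the max-plus convention chosen here is an assumption; the paper's precise conventions are not reproduced):

```python
NEG_INF = float("-inf")  # tropical zero: the additive identity of (max, +)

def trop_matmul(A, B):
    """Tropical (max, +) product: entry (i, j) is max_k (A[i][k] + B[k][j]),
    i.e. the heaviest two-step path weight through an intermediate index k."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# The tropical identity has 0 on the diagonal and -inf elsewhere.
I = [[0 if i == j else NEG_INF for j in range(2)] for i in range(2)]
A = [[1.0, 5.0], [2.0, 3.0]]
```

Multiplying transfer matrices of concatenated planar networks with `trop_matmul` composes them, which is the algebraic mechanism behind the factorization into elementary Jacobi matrices.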
  • A quantitative McDiarmid’s inequality for geometrically ergodic Markov chains
    • Havet Antoine
    • Lerasle Matthieu
    • Moulines Éric
    • Vernet Elodie
    Electronic Communications in Probability, Institute of Mathematical Statistics (IMS), 2020, 25. (10.1214/20-ECP286)
    DOI : 10.1214/20-ECP286
  • Comments on the monitoring report for MON 810 maize cultivation in 2018. Paris, 25 February 2020
    • Du Haut Conseil Des Biotechnologies Comité Scientifique
    • Angevin Frédérique
    • Bagnis Claude
    • Bar-Hen Avner
    • Barny Marie-Anne
    • Boireau Pascal
    • Brévault Thierry
    • Chauvel Bruno B.
    • Collonnier Cécile
    • Couvet Denis
    • Dassa Elie
    • Demeneix Barbara
    • Franche Claudine
    • Guerche Philippe
    • Guillemain Joël
    • Hernandez Raquet Guillermina
    • Khalife Jamal
    • Klonjkowski Bernard
    • Lavielle Marc
    • Le Corre Valérie
    • Lefèvre François
    • Lemaire Olivier
    • Lereclus Didier D.
    • Maximilien Rémy
    • Meurs Eliane
    • Naffakh Nadia
    • Négre Didier
    • Noyer Jean-Louis
    • Ochatt Sergio
    • Pages Jean-Christophe
    • Raynaud Xavier
    • Regnault-Roger Catherine
    • Renard Michel M.
    • Renault Tristan
    • Saindrenan Patrick
    • Simonet Pascal
    • Troadec Marie-Bérengère
    • Vaissière Bernard
    • de Verneuil Hubert
    • Vilotte Jean-Luc
    , 2020, 35 p. The analyses contained in the monitoring report by Bayer Agriculture BVBA reveal no major problem associated with the cultivation of MON 810 maize in 2018. However, the Scientific Committee (CS) of the HCB again identifies certain methodological weaknesses and limitations in the monitoring of target pests' susceptibility to the Cry1Ab toxin, calling the report's conclusions into question. In particular, the HCB considers that the use of a diagnostic dose has certain limitations for the early detection of resistance evolution, both in its underlying principle and in its implementation by Bayer, and recommends an alternative F2-screen method for determining the frequency of resistance alleles within a target pest population. The HCB also makes recommendations aimed at strengthening the implementation of refuge zones to prevent or delay the development of resistance to the Cry1Ab toxin in target pests. Regarding general surveillance, the CS of the HCB notes a problem of methodological relevance with respect to the questions studied, with arbitrary decision rules, inadequately justified conclusions, and a possible bias associated with the survey format used with the panel of farmers who agreed to answer the questionnaire. Finally, the CS of the HCB recommends that the monitoring report address the presence of teosinte in MON 810 maize cultivation areas in Spain and the potential risks associated with possible introgression of MON 810 maize genes into teosinte.
  • Kinetic derivation of diffuse-interface fluid models
    • Giovangigli Vincent
    Physical Review E, American Physical Society (APS), 2020, 102. We present a full derivation of capillary fluid equations from the kinetic theory of dense gases. These equations involve van der Waals' gradient energy, Korteweg's tensor, and Dunn and Serrin's heat flux as well as viscous and heat dissipative fluxes. Starting from macroscopic equations obtained from the kinetic theory of dense gases, we use a second-order expansion of the pair distribution function in order to derive the diffuse interface model. The capillary extra terms and the capillarity coefficient are then associated with intermolecular forces and the pair interaction potential. (10.1103/physreve.102.012110)
    DOI : 10.1103/physreve.102.012110
  • The mean field Schrödinger problem: ergodic behavior, entropy estimates and functional inequalities
    • Backhoff Julio
    • Conforti Giovanni
    • Gentil Ivan
    • Léonard Christian
    Probability Theory and Related Fields, Springer Verlag, 2020, 178, pp.475-530. (10.1007/s00440-020-00977-8)
    DOI : 10.1007/s00440-020-00977-8
  • Multimode communication through the turbulent atmosphere
    • Borcea Liliana
    • Garnier Josselin
    • Sølna Knut
    Journal of the Optical Society of America. A Optics, Image Science, and Vision, Optical Society of America, 2020, 37 (5), pp.720. A central question in free-space optical communications is how to improve the transfer of information between a transmitter and a receiver. The capacity of the communication channel can be increased by multiplexing of independent modes using either: (1) the multiple-input–multiple-output (MIMO) approach, where communication is done with modes obtained from the singular value decomposition of the transfer matrix from the transmitter array to the receiver array, or (2) the orbital angular momentum (OAM) approach, which uses vortex beams that carry angular momenta. In both cases, the number of usable modes is limited by the finite aperture of the transmitter and receiver, and the effect of the turbulent atmosphere. The goal of this paper is twofold: first, we show that the MIMO and OAM multiplexing schemes are closely related. Specifically, in the case of circular apertures, the leading singular vectors of the transfer matrix, which are useful for communication, are essentially the same as the commonly used Laguerre–Gauss vortex beams, provided these have a special radius that depends on the wavelength, the distance from the transmitter to the receiver, and the ratio of the radii of their apertures. Second, we characterize the effect of atmospheric turbulence on the communication modes using the phase screen method put in the mathematical framework of beam propagation in random media. (10.1364/JOSAA.384007)
    DOI : 10.1364/JOSAA.384007
  • A new McKean-Vlasov stochastic interpretation of the parabolic-parabolic Keller-Segel model: The one-dimensional case
    • Tomasevic Milica
    • Talay Denis
    Bernoulli, Bernoulli Society for Mathematical Statistics and Probability, 2020, 26 (2), pp.1323-1353. In this paper we analyze a stochastic interpretation of the one-dimensional parabolic-parabolic Keller-Segel system without cut-off. It involves an original type of McKean-Vlasov interaction kernel. At the particle level, each particle interacts with all the past of each other particle by means of a time integrated functional involving a singular kernel. At the mean-field level studied here, the McKean-Vlasov limit process interacts with all the past time marginals of its probability distribution in a similarly singular way. We prove that the parabolic-parabolic Keller-Segel system in the whole Euclidean space and the corresponding McKean-Vlasov stochastic differential equation are well-posed for any values of the parameters of the model. (10.3150/19-BEJ1158)
    DOI : 10.3150/19-BEJ1158
  • Variance reduction for Markov chains with application to MCMC
    • Belomestny D
    • Iosipoi L
    • Moulines E
    • Naumov A
    • Samsonov S
    Statistics and Computing, Springer Verlag (Germany), 2020. In this paper, we propose a novel variance reduction approach for additive functionals of Markov chains based on minimization of an estimate for the asymptotic variance of these functionals over suitable classes of control variates. A distinctive feature of the proposed approach is its ability to significantly reduce the overall finite sample variance. This feature is theoretically demonstrated by means of a deep non-asymptotic analysis of a variance reduced functional as well as by a thorough simulation study. In particular, we apply our method to various MCMC Bayesian estimation problems where it favourably compares to the existing variance reduction approaches.
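The control-variate principle behind this line of work can be shown in its simplest i.i.d. form (plain Monte Carlo rather than MCMC; the target E[exp(X)], the control g(X) = X, and all names below are assumptions chosen for illustration): subtract β(g(X) − E[g]) from each sample, with β fitted to minimize the sample variance.

```python
import math
import random

def control_variate_estimate(f_vals, g_vals, g_mean):
    """Control-variate estimate of E[f] together with the adjusted samples,
    using the fitted coefficient beta = Cov(f, g) / Var(g)."""
    n = len(f_vals)
    fbar = sum(f_vals)/n
    gbar = sum(g_vals)/n
    cov = sum((f - fbar)*(g - gbar) for f, g in zip(f_vals, g_vals))/n
    var = sum((g - gbar)**2 for g in g_vals)/n
    beta = cov/var
    # Each adjusted sample keeps the same mean (E[g] is known) but less variance.
    adjusted = [f - beta*(g - g_mean) for f, g in zip(f_vals, g_vals)]
    return sum(adjusted)/n, adjusted

# Estimate E[exp(X)] = exp(1/2) for X ~ N(0, 1), with g(X) = X (known mean 0).
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
f_vals = [math.exp(x) for x in xs]
est, adjusted = control_variate_estimate(f_vals, xs, 0.0)
```

The sample variance of `adjusted` is Var(f) − Cov(f, g)²/Var(g), so any correlated control strictly reduces it; the paper's contribution is choosing the control class and fitting criterion appropriately for Markov-chain (asymptotic) variance rather than this i.i.d. variance.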
  • On the Turnpike Property and the Receding-Horizon Method for Linear-Quadratic Optimal Control Problems
    • Breiten Tobias
    • Pfeiffer Laurent
    SIAM Journal on Control and Optimization, Society for Industrial and Applied Mathematics, 2020, 58 (2), pp.26. Optimal control problems with a very large time horizon can be tackled with the Receding Horizon Control (RHC) method, which consists in solving a sequence of optimal control problems with small prediction horizon. The main result of this article is the proof of the exponential convergence (with respect to the prediction horizon) of the control generated by the RHC method towards the exact solution of the problem. The result is established for a class of infinite-dimensional linear-quadratic optimal control problems with time-independent dynamics and integral cost. Such problems satisfy the turnpike property: the optimal trajectory remains most of the time very close to the solution to the associated static optimization problem. Specific terminal cost functions, derived from the Lagrange multiplier associated with the static optimization problem, are employed in the implementation of the RHC method. (10.1137/18M1225811)
    DOI : 10.1137/18M1225811
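The receding-horizon mechanism analyzed above can be sketched on a scalar discrete-time LQ toy problem (an illustrative analogue only; the paper treats infinite-dimensional continuous-time systems, and the plant numbers below are assumptions): at each step a short-horizon Riccati recursion is solved and only the first feedback gain is applied.

```python
def first_rhc_gain(a, b, q, r, horizon):
    """Backward Riccati recursion for x+ = a*x + b*u with stage cost
    q*x^2 + r*u^2 and terminal weight q; returns the gain K of the
    first control of the short-horizon problem (u = -K*x)."""
    P = q
    K = 0.0
    for _ in range(horizon):
        K = a*b*P/(r + b*b*P)
        P = q + a*a*P - a*b*P*K
    return K

def receding_horizon(a, b, q, r, horizon, x0, steps):
    """Receding-horizon loop: re-solve the short-horizon problem at each step
    and apply its first control. The problem here is time-invariant, so the
    recomputed gain is the same every step."""
    x = x0
    for _ in range(steps):
        K = first_rhc_gain(a, b, q, r, horizon)
        x = (a - b*K)*x
    return x

# An unstable scalar plant (a = 1.2) stabilized with a prediction horizon of 5.
x_final = receding_horizon(1.2, 1.0, 1.0, 1.0, 5, 1.0, 30)
```

Even this toy case shows the turnpike flavor: the short-horizon gain is already close to the infinite-horizon one, so the closed-loop trajectory quickly hugs the steady state, and longer prediction horizons change it only exponentially little.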
  • Accuracy assessment of the Non-Ideal Computational Fluid Dynamics model for siloxane MDM from the open-source SU2 suite
    • Gori Giulio
    • Zocca Marta
    • Cammi Giorgia
    • Spinelli Andrea
    • Congedo Pietro Marco
    • Guardone Alberto
    European Journal of Mechanics - B/Fluids, Elsevier, 2020, 79, pp.109-120. The first-ever accuracy assessment of a computational model for Non-Ideal Compressible-Fluid Dynamics (NICFD) flows is presented. The assessment relies on a comparison between numerical predictions, from the open-source suite SU2, and pressure and Mach number measurements of compressible fluid flows in the non-ideal regime. Namely, measurements regard supersonic flows of siloxane MDM (octamethyltrisiloxane, C8H24O2Si3) vapor expanding along isentropes in the close proximity of the liquid-vapor saturation curve. The model accuracy assessment takes advantage of an Uncertainty Quantification (UQ) analysis, to compute the variability of the numerical solution with respect to the uncertainties affecting the test-rig operating conditions. This allows for an uncertainty-based assessment of the accuracy of numerical predictions. The test set is representative of typical operating conditions of Organic Rankine Cycle systems and it includes compressible flows expanding through a converging-diverging nozzle in mildly-to-highly non-ideal conditions. All the considered flows are well represented by the computational model. Therefore, the reliability of the numerical implementation and the predictiveness of the NICFD model are confirmed. (10.1016/j.euromechflu.2019.08.014)
    DOI : 10.1016/j.euromechflu.2019.08.014