Publications

Theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2024

  • An implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation
    • Orlando Giuseppe
    International Journal for Numerical Methods in Fluids, Wiley, 2024, 96 (12), pp.1932-1959. We propose an implicit Discontinuous Galerkin (DG) discretization for incompressible two-phase flows using an artificial compressibility formulation. The conservative level set (CLS) method is employed in combination with a reinitialization procedure to capture the moving interface. A projection method based on the L-stable TR-BDF2 method is adopted for the time discretization of the Navier-Stokes equations and of the level set method. Adaptive Mesh Refinement (AMR) is employed to enhance the resolution in correspondence of the interface between the two fluids. The effectiveness of the proposed approach is shown in a number of classical benchmarks. A specific analysis on the influence of different choices of the mixture viscosity is also carried out. (10.1002/fld.5328)
    DOI : 10.1002/fld.5328
  • Sharp approximation and hitting times for stochastic invasion processes
    • Bansaye Vincent
    • Erny Xavier
    • Méléard Sylvie
    Stochastic Processes and their Applications, Elsevier, 2024, 178, pp.104458. We are interested in the invasion phase for stochastic processes with interactions when a single mutant with positive fitness arrives in a resident population at equilibrium. By a now classic approach, the first stage of the invasion is well approximated by a branching process. The macroscopic phase, when the mutant population is of the same order as the resident population, is described by the limiting dynamical system. We obtain sharper estimates and capture the intermediate mesoscopic phase for the invasive population. It allows us to characterize the hitting times of thresholds, which inherit a large variance from the first stages. These issues are motivated in particular by quantifying times to reach critical values for cancer populations or epidemics. (10.1016/j.spa.2024.104458)
    DOI : 10.1016/j.spa.2024.104458
  • A spectral dominance approach to large random matrices: part II
    • Bertucci Charles
    • Lasry Jean-Michel
    • Lions Pierre-Louis
    Journal de Mathématiques Pures et Appliquées, Elsevier, 2024, 192, pp.103630. This paper is the second of a series devoted to the study of the dynamics of the spectrum of large random matrices. We study general extensions of the partial differential equation arising in the characterization of the limit spectral measure of the Dyson Brownian motion. We provide a regularizing result for those generalizations. We also show that several results of part I extend to cases in which there is no spectral dominance property. We then provide several modeling extensions of such models as well as several identities for the Dyson Brownian motion. (10.1016/j.matpur.2024.103630)
    DOI : 10.1016/j.matpur.2024.103630
  • Corporate probability of default under an energy transition scenario with business model adaptation to the transition
    • Ndiaye Elisa
    • Bezat Antoine
    • Gobet Emmanuel
    • Guivarch Céline
    • Jiao Ying
    , 2024. The energy transition generates for the financial system the so-called 'transition risks', leading to the development of Climate Stress-Tests. We propose a firm-level corporate credit risk model that accounts for business model evolution in a transition scenario for Climate Stress-Tests. It is a structural, path-dependent model with stochastic total assets and debt, and it integrates all the transition risk drivers as well as physical risks. Simulations show that reducing emissions intensity may improve leverage ratios despite the mitigation costs. However, often-used strategies, such as aligning with sectoral averages, increase credit risk, with default probabilities up to 4 times higher in orderly transitions. Constant market share assumptions underestimate default risk for high polluters and overestimate it for low polluters. Tailored business model strategies are essential for managing credit risk effectively during the energy transition.
  • Tropical polynomial systems and game theory
    • Béreau Antoine
    , 2024. The main objects under study in this manuscript are systems of tropical polynomial equations and inequations. Given such a system, our aim is to decide its solvability efficiently, with a Nullstellensatz-type result adapted to the tropical setting. A second, more elaborate question consists in the effective computation of the solution set of such a system. In 2018, Grigoriev and Podolskii established a tropical analogue of the effective Nullstellensatz, showing that the solvability of a system of tropical polynomial equations is equivalent to the solvability of a linearized system, obtained by truncating the Macaulay matrix up to some degree bound, and provided an upper bound on the optimal truncation degree as a function of the number of variables, the number of tropical polynomials in the system, and their degrees. We establish an improved tropical Nullstellensatz, taking into consideration the possible sparsity of the tropical polynomials in a system. We rely on a construction of Canny and Emiris from 1993, refined one year later by Sturmfels. On top of accounting for sparsity, our result closes the gap between the truncation degree obtained by Grigoriev and Podolskii and the classical Macaulay degree bound. Furthermore, we establish a more general tropical Positivstellensatz based on the very same construction, at the cost of an inflation of the truncation degree. This tropical Positivstellensatz allows one to decide the inclusion of tropical basic semialgebraic sets, thus reducing decision problems for tropical semialgebraic sets to the solution of systems of tropical linear equalities and inequalities. We combine these two results in a global hybrid Positivstellensatz. Such tropical linear systems are known to be reducible to mean payoff games, which can be solved in practice, in a scalable way, by value iteration or policy iteration methods.
In particular, we propose a speedup of the classical value iteration algorithm of Zwick and Paterson, which we then use in order to decide the solvability of a system of tropical polynomial equalities and inequalities. This speedup relies on two ingredients: the use of the Krasnoselskii-Mann damping in the iteration process, as well as the introduction of a widening step, allowing for a quicker exit in case of infeasibility. This value iteration algorithm with widening was implemented in Python. We then develop a tropical analogue of eigenvalue methods in order to effectively compute the solution set of tropical polynomial systems. Relying on the connection between tropical linear systems and mean payoff games, we show that this solution set can be obtained by solving parametric mean payoff games, arising from appropriate linearizations of the tropical polynomial system using tropical Macaulay matrices. We present two approaches: a first one based on a dichotomic search, which simply allows one to certify the solvability of a tropical polynomial system, and a second, more elaborate approach, based on a tropical homotopy technique, allowing one to compute projections of the solution set onto any coordinate. Finally, we present a generalization of the Ishikawa fixed-point convergence theorem, extending it so as to tackle the case of polyhedral fixed-point free maps. This provides a theoretical framework motivating the use of the Krasnoselskii-Mann damping in the construction of our accelerated value iteration-type algorithm.
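As an aside, the Krasnoselskii-Mann damping mentioned in this abstract can be illustrated on any nonexpansive map: averaging each iterate with its image can turn a non-convergent fixed-point iteration into a convergent one. The sketch below is a minimal illustration under that assumption only; the rotation map is a hypothetical stand-in, not the thesis's mean payoff game operator (and there is no widening step here).

```python
def km_iterate(T, x0, alpha=0.5, iters=100):
    """Krasnoselskii-Mann damped fixed-point iteration:
    x_{k+1} = (1 - alpha) * x_k + alpha * T(x_k)."""
    x = list(x0)
    for _ in range(iters):
        tx = T(x)
        x = [(1 - alpha) * xi + alpha * ti for xi, ti in zip(x, tx)]
    return x

# Nonexpansive map with fixed point 0: rotation by 90 degrees.
# Plain iteration x_{k+1} = T(x_k) cycles forever on the unit circle;
# the damped iteration contracts (eigenvalue modulus |0.5 + 0.5i| < 1).
rot = lambda v: [-v[1], v[0]]

x = km_iterate(rot, [1.0, 0.0], alpha=0.5, iters=100)
print(max(abs(c) for c in x))  # a value below 1e-10
```

With `alpha = 0.5` the damped map has spectral radius 2^(-1/2), so 100 iterations bring the iterate within about 1e-15 of the fixed point.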
  • Gaskets of $O(2)$ loop-decorated random planar maps
    • Kammerer Emmanuel
    , 2024. We prove that for $n = 2$ the gaskets of critical rigid $O(n)$ loop-decorated random planar maps are $3/2$-stable maps. The case $n = 2$ thus corresponds to the critical case in random planar maps. The proof relies on the Wiener-Hopf factorisation for random walks. Our techniques also provide a characterisation of weight sequences of critical $O(2)$ loop-decorated maps.
  • Robust Reinforcement Learning : Theory and Practice
    • Clavier Pierre
    , 2024. Reinforcement learning (RL) is a machine learning paradigm that addresses the issue of sequential decision-making. In this paradigm, the algorithm, designated as an agent, responds to interactions with an environment. At each interaction, the agent performs an action within the environment, observes a new state of the environment, and receives a reward in consequence. The objective of the agent is to optimise a cumulative reward, which is defined by the user to align with the specific task at hand within the environment. The Markov Decision Process (MDP) theory is used in order to formalise these concepts. However, in the event of misspecifications or errors in the transition or reward function, the performance of RL may decline rapidly. To address this issue, the concept of robust MDPs has emerged, whereby the objective is to identify the optimal policy under the assumption that the transition kernel belongs to a bounded uncertainty set. This thesis presents a theoretical study of the sample complexity of robust MDPs, that is, the amount of data required to achieve an arbitrarily small convergence error. It demonstrates that in certain cases, the sample complexity of robust MDPs can be lower than for classical MDPs, which is a promising avenue for the derivation of sample-efficient algorithms. The thesis then goes on to derive new robust RL algorithms to strengthen the performance of RL in continuous control. Our method is based on risk-averse MDPs and zero-sum games, in which the adversary can be seen as an agent that changes the environment over time. In conclusion, the final section presents a benchmark for the evaluation of robust RL algorithms, which currently lack reproducible benchmarks for performance assessment.
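The max-min structure of robust MDPs described in this abstract can be sketched for the simplest case of a finite uncertainty set of transition kernels. Everything below (states, actions, rewards, candidate kernels) is a hypothetical toy model, not a method from the thesis:

```python
def robust_value_iteration(R, kernels, gamma=0.9, iters=500):
    """Robust Bellman iteration: for each state, take the best action
    assuming an adversary picks the worst transition kernel from a
    finite uncertainty set.  R[s][a] is the reward; each element of
    `kernels` is a candidate kernel P with P[s][a][t] = prob(s -> t)."""
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    for _ in range(iters):
        V = [max(R[s][a] + gamma * min(
                     sum(P[s][a][t] * V[t] for t in range(nS))
                     for P in kernels)
                 for a in range(nA))
             for s in range(nS)]
    return V

# Hypothetical 2-state, 2-action example with two candidate kernels.
P1 = [[[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.8, 0.2]]]
P2 = [[[0.5, 0.5], [0.5, 0.5]], [[0.6, 0.4], [0.3, 0.7]]]
R = [[1.0, 0.0], [0.0, 0.5]]

V_robust = robust_value_iteration(R, [P1, P2])
V_nominal = robust_value_iteration(R, [P1])
# The robust value is never above the nominal (single-kernel) value.
```

Enlarging the uncertainty set can only lower the value, which is the price paid for robustness to kernel misspecification.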
  • ICI Project: Individual-based epidemic propagation simulator based on a Digital Twin
    • Colomb Maxime
    • Cormier Quentin
    • Garnier Josselin
    • Graham Carl
    • Perret Julien
    • Talay Denis
    , 2024. Introduction: Multi-agent simulation can be used in the field of epidemiology to evaluate sanitary policies, with a custom application on a targeted sub-population or location. A multi-disciplinary team of geomaticians and probabilists develops the ICI project. It generates a very detailed and realistic geographical environment and simulates the propagation of an indoor-driven epidemic in a middle-sized city. Methods: We create original data fusion algorithms using multiple data sources. We first model an exhaustive census of activity or housing units within each building, with spatial and functional attributes. We then generate a geolocalized and well-detailed synthetic population. Each individual is assigned to a building’s activity for every time step of the simulation. A last module calculates probabilities of infection with a 9-state epidemiological model. Monte Carlo replications are run in order to obtain robust results. Results: A dedicated online application enables users to explore different characteristics of each set of input parameters (characteristics of the contamination, policy application, geographical zone…) or output indicators (sub-population, location of infection…). Spatio-temporal visual indicators are used to highlight spatial dynamic patterns. ICI simulations are run by the OpenMOLE platform for model exploration purposes (sensitivity and diversity analysis, multi-objective optimizations of sanitary policies). Conclusion: The ICI platform makes it possible to elaborate and compare a wide range of precise sanitary policies on well-detailed urban models. It has raised great interest in the interventional epidemiology community. A collaboration with public health authorities has started in order to adapt the platform for the simulation of future pandemics. ICI is also part of the French National Digital Twin project.
  • Ensemble Data Assimilation for Particle-based Methods
    • Duvillard Marius
    • Giraldi Loïc
    • Le Maitre Olivier
    , 2024. This study presents a novel approach to applying data assimilation techniques to particle-based simulations using the Ensemble Kalman Filter. While data assimilation methods have been effectively applied to Eulerian simulations, their application to Lagrangian solution discretizations has not been properly explored. We introduce two specific methodologies to address this gap. The first methodology employs an intermediary Eulerian transformation that combines a projection with a remeshing process. The second is a purely Lagrangian scheme designed for situations where remeshing is not appropriate. These methods are evaluated using a one-dimensional advection-diffusion model with periodic boundaries. Performance benchmarks for the one-dimensional scenario are conducted against a grid-based assimilation filter. Subsequently, the assimilation schemes are applied to a non-linear two-dimensional incompressible flow problem, solved via the Vortex-In-Cell method. The results demonstrate the feasibility of applying these methods in more complex scenarios, highlighting their effectiveness in both the one-dimensional and two-dimensional contexts.
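The Ensemble Kalman Filter analysis step underlying both methodologies can be sketched for a scalar state with an identity observation operator. This is the textbook stochastic-EnKF update under those simplifying assumptions, not the authors' particle-based schemes:

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_var):
    """Stochastic EnKF analysis step for a scalar state with identity
    observation operator: each member is pulled toward a perturbed
    observation, with the Kalman gain estimated from the ensemble."""
    var_f = statistics.variance(ensemble)   # forecast (prior) variance
    K = var_f / (var_f + obs_var)           # Kalman gain
    return [x + K * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# Prior ensemble ~ N(0, 1); observe y = 2 with unit observation noise.
# The analysis mean moves roughly halfway toward the observation and
# the ensemble spread shrinks.
random.seed(0)
prior = [random.gauss(0.0, 1.0) for _ in range(2000)]
posterior = enkf_update(prior, 2.0, 1.0)
```

With equal prior and observation variances the gain is about 0.5, so the posterior mean lands near 1 and the posterior variance drops below the prior's.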
  • Preconditioners based on Voronoi quantizers of random variable coefficients for stochastic elliptic partial differential equations
    • Venkovic Nicolas
    • Mycek Paul
    • Maître Olivier Le
    • Giraud Luc
    , 2024. A preconditioning strategy is proposed for the iterative solution of large numbers of linear systems with variable matrix and right-hand side which arise during the computation of solution statistics of stochastic elliptic partial differential equations with random variable coefficients sampled by Monte Carlo. Assuming that a truncated Karhunen-Loève expansion of a known transform of the random variable coefficient is available, we introduce a compact representation of the random coefficient in the form of a Voronoi quantizer. The number of Voronoi cells, each of which is represented by a centroidal variable coefficient, is set to the prescribed number $P$ of preconditioners. Upon sampling the random variable coefficient, the linear system assembled with a given realization of the coefficient is solved with the preconditioner whose centroidal variable coefficient is the closest to the realization. We consider different ways to define and obtain the centroidal variable coefficients, and we investigate the properties of the induced preconditioning strategies in terms of the average number of solver iterations for sequential simulations, and of load balancing for parallel simulations. Another approach, based on deterministic grids over the stochastic coordinates of the truncated representation of the random variable coefficient, is proposed with a stochastic dimension which increases with the number $P$ of preconditioners. This approach allows one to bypass the need for preliminary computations to determine the optimal stochastic dimension of the truncated approximation of the random variable coefficient for a given number of preconditioners.
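The core routing rule of this strategy — solve each sampled system with the preconditioner whose centroidal coefficient is nearest to the realization — reduces to a nearest-centroid query. A minimal sketch, with hypothetical low-dimensional coefficient vectors standing in for discretized coefficient fields:

```python
def nearest_centroid(sample, centroids):
    """Index of the preconditioner whose centroidal coefficient is
    closest (in l2 distance) to the sampled coefficient."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(centroids)),
               key=lambda j: dist2(sample, centroids[j]))

# Three hypothetical centroidal coefficients (P = 3 preconditioners):
centroids = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
# A sampled coefficient realization is routed to preconditioner 1:
print(nearest_centroid([0.9, 1.2], centroids))  # 1
```

In the paper's setting the centroids would come from a quantization of the truncated Karhunen-Loève representation; how evenly samples spread across the $P$ cells is exactly the load-balancing question the abstract raises.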
  • Regime switching on the propagation speed of travelling waves of some size-structured Myxobacteria population models
    • Calvez Vincent
    • El Abdouni Adil
    • Estavoyer Maxime
    • Madrid Ignacio
    • Olivier Julien
    • Tournus Magali
    ESAIM: Proceedings and Surveys, EDP Sciences, 2024, 77, pp.195-212. The spatial propagation of complex populations can depend on some structuring variables. In particular, recent developments in microscopy have revealed the impact of bacterial heterogeneity on population motility. Biofilms of Myxococcus xanthus bacteria have been shown to be structured in clusters of various sizes, which, remarkably, tend to move faster when they consist of a larger number of bacteria. We propose a minimal reaction-diffusion discrete-size structured model of a population of Myxococcus with two possible cluster sizes: isolated and paired bacteria. Numerical experiments show that this model exhibits travelling waves whose propagation speed depends on the increased motility of clusters and on the exchange rates between isolated bacteria and clusters. Notably, we present evidence of the existence of a characteristic threshold level θ* on the ratio between cluster motility and isolated bacteria motility, which separates two distinct regimes of propagation speed. When the ratio is less than or equal to θ*, the propagation speed of the population is constant with respect to the ratio. However, when the ratio is above θ*, the propagation speed increases. We also consider a generalised model with continuous-size structure, which shows the same behaviour. We extend the model to include interactions with a resource population, which yields qualitative behaviour in agreement with the biological experiments. (10.1051/proc/202477195)
    DOI : 10.1051/proc/202477195
  • Stability of a cross-diffusion system and approximation by repulsive random walks: a duality approach
    • Bansaye Vincent
    • Moussa Ayman
    • Muñoz-Hernández Felipe
    Journal of the European Mathematical Society, European Mathematical Society, 2024. We consider conservative cross-diffusion systems for two species where individual motion rates depend linearly on the local density of the other species. We develop duality estimates and obtain stability and approximation results. We first control the time evolution of the gap between two bounded solutions by means of its initial value. As a by product, we obtain a uniqueness result for bounded solutions valid for any space dimension, under a non-perturbative smallness assumption. Using a discrete counterpart of our duality estimates, we prove the convergence of random walks with local repulsion in one dimensional discrete space to cross-diffusion systems. More precisely, we prove quantitative estimates for the gap between the stochastic process and the cross-diffusion system. We give first rough but general estimates; then we use the duality approach to obtain fine estimates under less general conditions. (10.4171/jems/1540)
    DOI : 10.4171/jems/1540
  • Controlled superprocesses and HJB equation in the space of finite measures
    • Ocello Antonio
    , 2023. This paper introduces the formalism required to analyze a certain class of stochastic control problems that involve a superdiffusion as the underlying controlled system. To establish the existence of these processes, we show that they are weak scaling limits of controlled branching processes. First, we prove a generalized Itô formula for these dynamics in the space of finite measures, using differentiation in the space of finite positive measures. This lays the groundwork for a PDE characterization of the value function of a control problem, which leads to a verification theorem. Finally, focusing on an exponential-type value function, we show how a regular solution to a finite-dimensional HJB equation can be used to construct a smooth solution to the HJB equation in the space of finite measures, via the so-called branching property technique.
  • A Kesten Stigum theorem for Galton-Watson processes with infinitely many types in a random environment
    • Ligonnière Maxime
    , 2024. In this paper, we study a Galton-Watson process $(Z_n)$ with infinitely many types in a random ergodic environment $\bar{\xi}=(\xi_n)_{n\geq 0}$. We focus on the supercritical regime of the process, where the quenched average of the size of the population grows exponentially fast to infinity. We work under Doeblin-type assumptions coming from a previous paper, which ensure that the quenched mean semigroup of $(Z_n)$ satisfies some ergodicity property and admits a $\bar{\xi}$-measurable family of space-time harmonic functions. We use these properties to derive an associated nonnegative martingale $(W_n)$. Under an $L\log(L)^{1+\varepsilon}$-integrability assumption on the offspring distribution, we prove that the almost sure limit $W$ of the martingale $(W_n)$ is not degenerate. Assuming some uniform $L^2$-integrability of the offspring distribution, we prove that conditionally on $\{W>0\}$, at a large time $n$, both the size of the population and the distribution of types correspond to those of the quenched mean of the population $\mathbb{E}[Z_n|\bar{\xi}, Z_0]$. We finally introduce an example of a process modelling a population with a discrete age structure. In this context, we provide more tractable criteria which guarantee that our various assumptions are met.
  • Stochastic optimal transport and Hamilton-Jacobi-Bellman equations on the set of probability measures
    • Bertucci Charles
    Annales de l'Institut Henri Poincaré (C), Analyse non linéaire, EMS, 2024. We introduce a stochastic version of the optimal transport problem. We provide an analysis by means of the study of the associated Hamilton-Jacobi-Bellman equation, which is set on the set of probability measures. We introduce a new definition of viscosity solutions of this equation, which yields general comparison principles, in particular for cases involving terms modeling stochasticity in the optimal control problem. We are then able to establish results of existence and uniqueness of viscosity solutions of the Hamilton-Jacobi-Bellman equation. These results rely on controllability results for stochastic optimal transport that we also establish. (10.4171/AIHPC/138)
    DOI : 10.4171/AIHPC/138
  • Short Paper - Quadratic minimization: from conjugate gradient to an adaptive Polyak’s momentum method with Polyak step-sizes
    • Goujaud Baptiste
    • Taylor Adrien
    • Dieuleveut Aymeric
    Open Journal of Mathematical Optimization, Centre Mersenne, 2024, 5, pp.1-10. In this work, we propose an adaptive variation on the classical Heavy-ball method for convex quadratic minimization. The adaptivity crucially relies on so-called “Polyak step-sizes”, which consist of using the knowledge of the optimal value of the optimization problem at hand instead of problem parameters such as a few eigenvalues of the Hessian of the problem. This method also happens to be equivalent to a variation of the classical conjugate gradient method, and thereby inherits many of its attractive features, including its finite-time convergence, instance optimality, and its worst-case convergence rates. The classical gradient method with Polyak step-sizes is known to behave very well in situations in which it can be used, and the question of whether incorporating momentum into this method is possible and can improve the method itself appeared to be open. We provide a definitive answer to this question for minimizing convex quadratic functions, an arguably necessary first step for developing such methods in more general setups. (10.5802/ojmo.36)
    DOI : 10.5802/ojmo.36
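For context, the classical gradient method with Polyak step-sizes that this abstract builds on can be sketched as follows. This is the plain momentum-free method on a hypothetical ill-conditioned quadratic, not the adaptive Heavy-ball variant proposed in the paper:

```python
def polyak_gd(f, grad, f_star, x0, iters=500):
    """Gradient descent with Polyak step-sizes: the step
    gamma_k = (f(x_k) - f_star) / ||grad f(x_k)||^2 uses the known
    optimal value f_star instead of curvature constants."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gn2 = sum(gi * gi for gi in g)
        if gn2 == 0.0:          # reached a stationary point
            break
        step = (f(x) - f_star) / gn2
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Ill-conditioned convex quadratic f(x) = 0.5*(x1^2 + 10*x2^2), f* = 0.
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: [x[0], 10.0 * x[1]]
x = polyak_gd(f, grad, 0.0, [1.0, 1.0])
```

Knowing `f_star` replaces knowledge of the Hessian's extreme eigenvalues; the paper's contribution is showing how to add momentum to this scheme while keeping that property.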
  • Bridging Rayleigh-Jeans and Bose-Einstein condensation of a guided fluid of light with positive and negative temperatures
    • Zanaglia Lucas
    • Garnier Josselin
    • Rica Sergio
    • Kaiser Robin
    • Wabnitz Stefano
    • Michel Claire
    • Doya Valerie
    • Picozzi Antonio
    Physical Review A, American Physical Society, 2024, 110, pp.063530. We consider the free propagation geometry of a light beam (or fluid of light) in a multimode waveguide. As a result of the effective photon-photon interactions, the photon fluid thermalizes to an equilibrium state during its conservative propagation. In this configuration, Rayleigh-Jeans (RJ) thermalization and condensation of classical light waves have been recently observed experimentally in graded index multimode optical fibers characterized by a 2D parabolic trapping potential. As is well known, the properties of RJ condensation differ substantially from those of Bose-Einstein (BE) condensation: The condensate fraction decreases quadratically with the temperature for BE condensation, while it decreases linearly for RJ condensation. Furthermore, for quantum particles the heat capacity tends to zero at small temperatures, and it takes a constant value in the classical particle limit at high temperatures. This is in contrast with classical RJ waves, where the specific heat takes a constant value at small temperatures, and tends to vanish above the condensation transition in the normal (uncondensed) state. Here, we reconcile the thermodynamic properties of BE and RJ condensation: By introducing a frequency cut-off inherent to light propagation in a waveguide, we derive generalized expressions of the thermodynamic properties that include the RJ and BE limits as particular cases. We extend the approach to encompass negative temperatures. In contrast to positive temperatures, the specific heat does not display a singular behavior at negative temperatures, reflecting the non-critical nature of the transition to a macroscopic population of the highest energy level. Our work contributes to understanding the quantum-to-classical crossover in the equilibrium properties of light, within a versatile experimental platform based on nonlinear optical propagation in multimode waveguides.
  • Large-Eddy Simulation of solid/fluid heat and mass transfer applied to the thermal degradation of composite material for fire certification applications
    • Moureau Vincent
    • Letournel Roxane
    • Bioche Kévin
    • Dellinger Nicolas
    • Grenouilloux Adrien
    , 2024. With the current trend towards improved aircraft efficiency, carbon fiber reinforced polymers (CFRP) have increasingly been used for both the fuselage and the nacelle’s fairing. A critical part of the design phase remains the fire certification of current and future components. Current international standards, such as FAR25.856(b):2003 and ISO2685:1998(e), ensure the thermal resistance of these materials when submitted to high heat loads. Still, certification test campaigns are costly and often require a long set-up time. The introduction of novel numerical tools for the prediction of degraded material properties could improve the design process by providing supplementary inputs. Over the past decades, Large-Eddy Simulation (LES) has become a valuable tool for the simulation of unsteady reactive flows. Several Conjugate Heat-Transfer (CHT) approaches have been developed to address the unsteady interactions between a fluid and a solid solver [1]. These efforts have made it possible to estimate the impact of the flame on the temperature distribution of solid geometries. However, the number of studies addressing the interaction of a flame leading to the degradation of a composite material is limited [2]. As a matter of fact, the resolution of this type of physics requires in-depth knowledge of the material's behavior over a wide range of temperatures. Properties such as the spatial arrangement of the different constituents have a major impact on heat conduction and therefore on the degradation kinetics triggered at higher temperatures. In the present paper, a framework for the simulation of fire certification tests under realistic conditions is presented. It relies on the coupling between a fluid solver, a radiation solver and a solid solver, capable of respectively solving for the reactive flow, the radiative heat losses and the thermal degradation of a composite material.
  • Reconstructing discrete measures from projections. Consequences on the empirical Sliced Wasserstein Distance
    • Tanguy Eloi
    • Flamary Rémi
    • Delon Julie
    Comptes Rendus. Mathématique, Académie des sciences (Paris), 2024, 362, pp.1121--1129. This paper deals with the reconstruction of a discrete measure $\gamma_Z$ on $\mathbb{R}^d$ from the knowledge of its pushforward measures $P_i \# \gamma_Z$ by linear applications $P_i : \mathbb{R}^d \to \mathbb{R}^{d_i}$ (for instance projections onto subspaces). The measure $\gamma_Z$ being fixed, assuming that the rows of the matrices $P_i$ are independent realizations of laws which do not give mass to hyperplanes, we show that if $\sum_i d_i > d$, this reconstruction problem almost certainly has a unique solution. This holds for any number of points in $\gamma_Z$. A direct consequence of this result is an almost-sure separability property of the empirical Sliced Wasserstein distance.
  • The shortest telomere in telomerase-negative cells triggers replicative senescence at a critical threshold length and also fuels genomic instability
    • Berardi Prisca
    • Martinez Fernandez Veronica
    • Rat Anaïs
    • Rosas Bringas Fernando
    • Jolivet Pascale
    • Langston Rachel
    • Mattarocci Stefano
    • Maes Alexandre
    • Aspert Théo
    • Zeinoun Bechara
    • Casier Karine
    • Kazemier Hinke
    • Charvin Gilles
    • Doumic Marie
    • Chang Michael
    • Teixeira Maria Teresa
    , 2025. In the absence of telomerase, telomere shortening triggers the DNA damage checkpoint and replicative senescence, a potent tumor suppressor mechanism. Paradoxically, this same process is also associated with oncogenic genomic instability. Yet, the precise mechanism that connects these seemingly opposing forces remains poorly understood. To directly study the complex interplay between senescence, telomere dynamics and genomic instability, we developed a system in Saccharomyces cerevisiae to generate and track the dynamics of telomeres of precise length in the absence of telomerase. Using single-telomere and single-cell analyses combined with mathematical modeling, we identify a threshold length at which telomeres switch into dysfunction. A single shortest telomere below the threshold length is necessary and sufficient to trigger the onset of replicative senescence in a majority of cells. At the population level, fluctuation assays establish that rare genomic instability arises predominantly in cis to the shortest telomere, as non-reciprocal translocations that result in re-elongation of the shortest telomere and likely escape from senescence. The switch of the shortest telomere into dysfunction and its subsequent processing in telomerase-negative cells thus serve as the mechanistic link between replicative senescence onset, genomic instability and the initiation of post-senescence survival, explaining the contradictory roles of replicative senescence in oncogenesis. (10.1101/2025.01.27.635053)
    DOI : 10.1101/2025.01.27.635053
  • Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
    • Akhavan Arya
    • Chzhen Evgenii
    • Pontil Massimiliano
    • Tsybakov Alexandre B.
    Journal of Machine Learning Research, Microtome Publishing, 2024. This work studies minimization problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth and possibly satisfies additional properties. We consider two kinds of zero-order projected gradient descent algorithms, which differ in the form of the gradient estimator. The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016). We present an improved analysis of this algorithm on the class of highly smooth and strongly convex functions studied in the prior work, and we derive rates of convergence for two more general classes of non-convex functions. Namely, we consider highly smooth functions satisfying the Polyak-Łojasiewicz condition and the class of highly smooth functions with no additional property. The second algorithm is based on randomization over the $\ell_1$ sphere, and it extends to the highly smooth setting the algorithm that was recently proposed for Lipschitz convex functions in Akhavan et al. (2022). We show that, in the case of a noiseless oracle, this novel algorithm enjoys better bounds on bias and variance than the $\ell_2$ randomization and the commonly used Gaussian randomization algorithms, while in the noisy case both $\ell_1$ and $\ell_2$ algorithms benefit from similar improved theoretical guarantees. The improvements are achieved thanks to new proof techniques based on Poincaré-type inequalities for uniform distributions on the $\ell_1$ or $\ell_2$ spheres. The results are established under weak (almost adversarial) assumptions on the noise. Moreover, we provide minimax lower bounds proving optimality or near-optimality of the obtained upper bounds in several cases.
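The $\ell_2$-sphere randomization at the heart of the first algorithm can be illustrated with a generic two-point zero-order gradient estimator. This is the textbook form under standard assumptions, not the paper's exact estimator (which uses smoothing kernels to exploit higher-order smoothness):

```python
import math
import random

def sphere_sample(d):
    """Uniform sample on the unit l2 sphere in R^d (normalized Gaussian)."""
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(gi * gi for gi in g))
    return [gi / n for gi in g]

def zo_gradient(f, x, h=1e-3, n_samples=5000):
    """Two-point zero-order gradient estimate with l2-sphere
    randomization, averaged over random directions z:
    g_hat = (d / (2h)) * (f(x + h z) - f(x - h z)) * z."""
    d = len(x)
    est = [0.0] * d
    for _ in range(n_samples):
        z = sphere_sample(d)
        fp = f([xi + h * zi for xi, zi in zip(x, z)])
        fm = f([xi - h * zi for xi, zi in zip(x, z)])
        c = d * (fp - fm) / (2.0 * h)
        est = [e + c * zi / n_samples for e, zi in zip(est, z)]
    return est

# Hypothetical objective f(x) = x1^2 + 2*x2^2, gradient (2, 4) at (1, 1).
random.seed(1)
g_hat = zo_gradient(lambda x: x[0] ** 2 + 2.0 * x[1] ** 2, [1.0, 1.0])
```

Since $\mathbb{E}[z z^\top] = I/d$ for $z$ uniform on the sphere, the estimator is (approximately) unbiased, and averaging over directions controls its variance.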
  • Stochastic mesoscale characterization of ablative materials for atmospheric entry
    • Girault Florian
    • Torres Herrador Francisco
    • Helber Bernd
    • Turchi Alessandro
    • Magin Thierry
    • Congedo Pietro Marco
Applied Mathematical Modelling, Elsevier, 2024, 135, pp.745-758. This work aims to estimate material properties at a mesoscopic level using the PuMA software from an uncertainty quantification perspective. The stochastic behavior of PuMA, mainly related to the random distribution of the fibers, is an intrinsic source of uncertainty. The choice of some physical parameters, such as the fiber's thermal conductivity, is an additional source of uncertainty. The first contribution is a low-cost surrogate-based methodology with an unequal allocation scheme applied for the first time to the stochastic mesoscale characterization of ablative materials. A second contribution of this work is the uncertainty propagation and sensitivity analysis of the material properties, which also yields a systematic assessment of the choice of the voxel resolution for both the fibers and the domain. Specifically, the convergence of the quantities of interest can be monitored, thus identifying the minimal reference elementary volume. (10.1016/j.apm.2024.07.027)
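The surrogate-based uncertainty-propagation idea can be illustrated generically: run the expensive model a few times, fit a cheap response surface, then do Monte Carlo through the surrogate. This is a toy sketch only; the closed-form "material model", the input ranges, and the quadratic response surface are illustrative assumptions, not PuMA outputs or the paper's methodology.

```python
import numpy as np

def quadratic_features(p):
    """Quadratic response-surface basis in two inputs (k_fiber, porosity)."""
    k, phi = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(k), k, phi, k * phi, k**2, phi**2])

def fit_surrogate(train_inputs, train_outputs):
    """Least-squares quadratic surrogate of an expensive model."""
    coeffs, *_ = np.linalg.lstsq(
        quadratic_features(train_inputs), train_outputs, rcond=None)
    return lambda p: quadratic_features(p) @ coeffs

def toy_model(p):
    # Hypothetical stand-in for the expensive mesoscale solver:
    # an effective conductivity that decreases with porosity.
    return p[:, 0] * (1.0 - p[:, 1]) ** 1.5

rng = np.random.default_rng(0)
lo, hi = np.array([10.0, 0.80]), np.array([14.0, 0.90])

# A handful of "expensive" runs to train the surrogate ...
train = lo + (hi - lo) * rng.random((30, 2))
surrogate = fit_surrogate(train, toy_model(train))

# ... then cheap Monte Carlo uncertainty propagation through it.
mc = lo + (hi - lo) * rng.random((100_000, 2))
mean_surr = float(surrogate(mc).mean())
```

The unequal allocation scheme of the paper refines this by spending more model evaluations where they reduce the surrogate error most; that refinement is not shown here.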
    DOI : 10.1016/j.apm.2024.07.027
  • Joint Channel Selection using FedDRL in V2X
    • Mancini Lorenzo
    • Labbi Safwan
    • Abed-Meraim Karim
    • Boukhalfa Fouzi
    • Durmus Alain
    • Mangold Paul
    • Moulines Éric
2024. (10.1109/mecom61498.2024.10881343)
    DOI : 10.1109/mecom61498.2024.10881343
  • A scenario for an evolutionary selection of ageing
    • Roget Tristan
    • Macmurray Claire
    • Jolivet Pierre
• Méléard Sylvie
    • Rera Michael
eLife, eLife Sciences Publications, 2024, 13, pp.RP92914. Signs of ageing become apparent only late in life, after organismal development is finalized. Ageing, most notably, decreases an individual’s fitness. As such, it is most commonly perceived as a non-adaptive force of evolution and considered a by-product of natural selection. Building upon the evolutionarily conserved age-related Smurf phenotype, we propose a simple mathematical life-history trait model in which an organism is characterized by two core abilities: reproduction and homeostasis. Through the simulation of this model, we observe (1) the convergence of fertility’s end with the onset of senescence, (2) the relative success of ageing populations, as compared to non-ageing populations, and (3) the enhanced evolvability (i.e. the generation of genetic variability) of ageing populations. In addition, we formally demonstrate the mathematical convergence observed in (1). We thus theorize that mechanisms that link the timing of fertility and ageing have been selected and fixed over evolutionary history, which, in turn, explains why ageing populations are more evolvable and therefore more successful. Broadly speaking, our work suggests that ageing is an adaptive force of evolution. (10.7554/eLife.92914.3)
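A deliberately minimal discrete-time caricature of such a two-trait life-history model can be written in a few lines. Everything here is a simplifying assumption for illustration: the traits `x_b` (end of fertility) and `x_d` (onset of senescent mortality), the per-step rates, and the absence of mutation; the paper's actual model is continuous-time and includes trait mutation, which drives the selection dynamics it studies.

```python
import random

# Toy individual: (age, x_b, x_d) — fertile while age < x_b,
# exposed to senescent death only once age > x_d (hypothetical values).
def step(population, birth_prob, death_prob, rng):
    next_pop = []
    for age, xb, xd in population:
        # senescent individuals die with probability death_prob
        if age > xd and rng.random() < death_prob:
            continue
        next_pop.append((age + 1, xb, xd))
        # fertile individuals reproduce; offspring inherit the parental traits
        # (a real model would perturb x_b and x_d by mutation here)
        if age < xb and rng.random() < birth_prob:
            next_pop.append((0, xb, xd))
    return next_pop

rng = random.Random(0)
pop = [(0, 10, 30)] * 100   # 100 founders, fertile until age 10
for _ in range(20):
    pop = step(pop, birth_prob=0.2, death_prob=0.5, rng=rng)
```

With `x_d` larger than the simulated horizon, no deaths occur and the population can only grow, which makes the toy easy to sanity-check; the interesting regimes of the paper arise when mutation lets `x_b` and `x_d` co-evolve.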
    DOI : 10.7554/eLife.92914.3
  • Joint SPX-VIX calibration with Gaussian polynomial volatility models: deep pricing with quantization hints
    • Abi Jaber Eduardo
    • Illand Camille
    • Li Shaun Xiaoyuan
Mathematical Finance, Wiley, 2024. We consider the joint SPX-VIX calibration within a general class of Gaussian polynomial volatility models in which the volatility of the SPX is assumed to be a polynomial function of a Gaussian Volterra process defined as a stochastic convolution between a kernel and a Brownian motion. By performing joint calibration to daily SPX-VIX implied volatility surface data between 2012 and 2022, we compare the empirical performance of different kernels and their associated Markovian and non-Markovian models, such as rough and non-rough path-dependent volatility models. In order to ensure an efficient calibration and a fair comparison between the models, we develop a generic unified method in our class of models for fast and accurate pricing of SPX and VIX derivatives based on functional quantization and Neural Networks. For the first time, we identify a conventional one-factor Markovian continuous stochastic volatility model that is able to achieve remarkable fits of the implied volatility surfaces of the SPX and VIX together with the term structure of VIX futures. What is even more remarkable is that our conventional one-factor Markovian continuous stochastic volatility model outperforms, in all market conditions, its rough and non-rough path-dependent counterparts with the same number of parameters.
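The building block of this model class, a Gaussian Volterra process X_t = ∫₀ᵗ K(t−s) dW_s with volatility p(X_t) for a polynomial p, can be simulated by a left-point discretization of the stochastic convolution. This is a generic sketch: the kernel choice, the grid shift used to avoid evaluating the kernel at zero, and the polynomial coefficients below are illustrative assumptions, not the paper's quantization-based pricing setup.

```python
import numpy as np

def volterra_paths(kernel, n_steps, dt, n_paths, rng):
    """Simulate X_t = int_0^t K(t - s) dW_s on a uniform grid.

    The kernel is evaluated at lags shifted by one step (dt, 2*dt, ...),
    which also sidesteps the singularity at 0 of rough kernels such as
    K(u) = u**(H - 1/2).
    """
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    t_grid = np.arange(1, n_steps + 1) * dt
    X = np.zeros((n_paths, n_steps))
    for i in range(n_steps):
        # lower-triangular convolution: sum over s <= t of K(t - s) dW_s
        K = kernel(t_grid[i] - t_grid[:i + 1] + dt)
        X[:, i] = dW[:, :i + 1] @ K
    return X

def polynomial_vol(X, coeffs):
    """Volatility as a polynomial p(X_t), coefficients in increasing degree."""
    return np.polynomial.polynomial.polyval(X, coeffs)
```

For the exponential kernel K(u) = exp(-u), X is an Ornstein-Uhlenbeck-type Gaussian process, so the simulated variance at time T can be checked against the closed form (1 - exp(-2T)) / 2.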