
Publications

CMAP theses are available via the following link:
Discover CMAP theses

The publications appearing in the HAL open archive are listed below, sorted by year.

2026

  • Autoregressive Multiplier Bootstrap for In-situ Error Estimation and Quality Monitoring of Finite Time Averages in Turbulent Flow Simulations
    • Papagiannis Christos
    • Balarac Guillaume
    • Congedo Pietro Marco
    • Le Maître Olivier P
    Computer Methods in Applied Mechanics and Engineering, Elsevier, 2026, 452, pp.118664. In Computational Fluid Dynamics (CFD), and particularly within Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES), the computational cost is largely dictated by the effort required to obtain statistically converged quantities such as time-averaged fields and higher-order moments. Despite the importance of accurately quantifying statistical uncertainty in unsteady simulations, no continuous and cost-effective, on-line method currently exists for monitoring the convergence quality of such statistics during runtime. This work introduces a novel, fully on-line bootstrapping approach to estimate the variance of finite-time averages without requiring the estimation of the flow's Auto-Correlation Function (ACF). Unlike existing methods that rely on ACF estimation, which are often impractical due to excessive storage demands in large-scale simulations, or require off-line processing or a priori modeling assumptions, our method operates entirely during the simulation and incurs minimal overhead. The proposed technique employs a recursive update of bootstrap replicates of the time average, using correlated random weights generated via an autoregressive model. This formulation is computationally efficient: the update cost scales linearly with the number of bootstrap replicates and the dimensionality of the flow field, and the autoregressive model is inexpensive to evaluate. The method only requires storage of a small number of fields, making it suitable for large-scale CFD applications. We demonstrate the effectiveness of the approach on synthetic data from the Ornstein-Uhlenbeck process and on two canonical LES cases: a turbulent pipe flow and a round jet. We further discuss the method's applicability to simulations with non-uniform time stepping, highlighting its flexibility and robustness. (10.1016/j.cma.2025.118664)
    DOI : 10.1016/j.cma.2025.118664
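A minimal sketch of the on-line multiplier-bootstrap idea described in this entry, assuming AR(1) Gaussian multiplier weights with unit mean, a fixed weight correlation time, and an Ornstein-Uhlenbeck test signal (all illustrative choices, not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal: an Ornstein-Uhlenbeck process, as in the paper's first test case.
T, dt, theta, sigma = 20000, 0.01, 1.0, 1.0
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

B = 200                     # number of bootstrap replicates
rho = np.exp(-dt / 0.5)     # AR(1) correlation of the weights (hypothetical correlation time 0.5)
z = np.zeros(B)             # latent AR(1) states, one per replicate
S = np.zeros(B)             # running weighted sums
W = np.zeros(B)             # running weight totals
mean_run = 0.0

for t in range(T):
    # advance the correlated multiplier weights (AR(1) with unit mean)
    z = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(B)
    w = 1.0 + z
    # recursive update of each bootstrap replicate of the time average
    S += w * x[t]
    W += w
    mean_run += (x[t] - mean_run) / (t + 1)

replicates = S / W
var_hat = replicates.var(ddof=1)  # bootstrap estimate of Var(finite-time average)
print(f"time average = {mean_run:.4f}, bootstrap std of the average = {np.sqrt(var_hat):.4f}")
```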
  • Robust a posteriori estimation of probit-lognormal seismic fragility curves via sequential design of experiments and constrained reference prior
    • Van Biesbroeck Antoine
    • Gauchy Clément
    • Feau Cyril
    • Garnier Josselin
    Nuclear Engineering and Design, Elsevier, 2026, 448, pp.114695. A seismic fragility curve expresses the probability of failure of a structure conditional on an intensity measure (IM) derived from seismic signals. When only limited data is available, the practitioner often refers to the probit-lognormal model coupled with maximum likelihood estimation (MLE) to obtain estimates of these curves. This means that only a binary indicator of the state (BIS) of the structure is known, namely a failure or non-failure state indicator, when it is subjected to a seismic signal with an intensity measure IM. In this context, the objective of this work is to propose a method for optimally estimating such curves by obtaining the most precise estimate possible with the minimum of data. The novelty of our work is twofold. First, we present and show how to mitigate the likelihood degeneracy problem, which is ubiquitous with small data sets and hampers frequentist approaches such as MLE. Second, we propose a novel strategy for sequential design of experiments (DoE) that selects seismic signals from a large database of synthetic or real signals via their IM values, to be applied to structures to evaluate the corresponding BISs. This strategy relies on a criterion based on information theory in a Bayesian framework. It therefore aims to sequentially designate the IM value such that the pair (IM, BIS) has on average, with respect to the BIS of the structure, the greatest impact on the posterior distribution of the fragility curve. The methodology is applied to a case study from the nuclear industry. The results demonstrate its ability to efficiently and robustly estimate the fragility curve, and to avoid degeneracy even with a limited amount of data, i.e., fewer than 100 data points. Furthermore, we demonstrate that the estimates quickly reach the model bias induced by the probit-lognormal modeling. Finally, two criteria are suggested to help the user stop the DoE algorithm. (10.1016/j.nucengdes.2025.114695)
    DOI : 10.1016/j.nucengdes.2025.114695
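For context, the probit-lognormal model referred to above writes the fragility curve as P(failure | IM = a) = Φ(ln(a/α)/β). The sketch below fits (α, β) by plain maximum likelihood from synthetic (IM, BIS) pairs; it illustrates the baseline estimator whose degeneracy the paper mitigates, not the sequential Bayesian design itself, and all numerical values are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Probit-lognormal fragility model: P(failure | IM = a) = Phi(ln(a / alpha) / beta)
def fragility(a, alpha, beta):
    return norm.cdf(np.log(a / alpha) / beta)

# Synthetic experiment: IM values and binary state indicators (BIS)
alpha_true, beta_true = 0.8, 0.4
im = rng.lognormal(mean=np.log(0.7), sigma=0.5, size=80)
bis = rng.uniform(size=im.size) < fragility(im, alpha_true, beta_true)

# Maximum likelihood estimation of (alpha, beta) from the (IM, BIS) pairs
def neg_log_lik(params):
    alpha, beta = np.exp(params)   # work in log-space to keep both parameters positive
    p = np.clip(fragility(im, alpha, beta), 1e-12, 1 - 1e-12)
    return -np.sum(bis * np.log(p) + (~bis) * np.log(1 - p))

res = minimize(neg_log_lik, x0=np.log([1.0, 0.5]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"alpha = {alpha_hat:.3f} (true {alpha_true}), beta = {beta_hat:.3f} (true {beta_true})")
```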
  • Bigraded Castelnuovo-Mumford regularity and Groebner bases
    • Bender Matías R
    • Busé Laurent
    • Checa Carles
    • Tsigaridas Elias
    Journal of Symbolic Computation, Elsevier, 2026, 133, pp.26. We study the relation between the bigraded Castelnuovo-Mumford regularity of a bihomogeneous ideal $I$ in the coordinate ring of the product of two projective spaces and the bidegrees of a Groebner basis of $I$ with respect to the degree reverse lexicographical monomial order in generic coordinates. For the single-graded case, Bayer and Stillman unraveled all aspects of this relationship forty years ago and these results led to complexity estimates for computations with Groebner bases. We build on this work to introduce a bounding region of the bidegrees of minimal generators of bihomogeneous Groebner bases for $I$. We also use this region to certify the presence of some minimal generators close to its boundary. Finally, we show that, up to a certain shift, this region is related to the bigraded Castelnuovo-Mumford regularity of $I$. (10.1016/j.jsc.2025.102487)
    DOI : 10.1016/j.jsc.2025.102487
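The objects in this entry can be explored with any computer algebra system. A small sketch using SymPy (an assumed tool choice, not one used in the paper): compute a degree-reverse-lexicographic Groebner basis of a toy bihomogeneous ideal in the coordinate ring of P^1 x P^1 and read off the bidegrees of its generators.

```python
from sympy import symbols, groebner, Poly

x0, x1, y0, y1 = symbols("x0 x1 y0 y1")

# A small bihomogeneous ideal: generators of bidegree (1, 1) and (2, 1).
f = x0 * y0 + x1 * y1
g = x0**2 * y1 + x1**2 * y0
gb = groebner([f, g], x0, x1, y0, y1, order="grevlex")

# Each element of the reduced basis of a bihomogeneous ideal is bihomogeneous,
# so its bidegree (degree in the x's, degree in the y's) is well defined.
for p in gb.exprs:
    bideg = (Poly(p, x0, x1).total_degree(), Poly(p, y0, y1).total_degree())
    print(bideg, p)
```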
  • Asymptotic approaches in inverse problems for depolymerization estimation
    • Doumic Marie
    • Moireau Philippe
    Inverse Problems and Imaging, AIMS American Institute of Mathematical Sciences, 2026, 20, pp.105-155. Depolymerization reactions constitute frequent experiments, for instance in biochemistry for the study of amyloid fibrils. The quantities experimentally observed are related to the time dynamics of a quantity averaged over all polymer sizes, such as the total polymerised mass or the mean size of particles. The question analysed here is to link this measurement to the initial size distribution. To do so, we first derive, from the initial reaction system, two asymptotic models: at first order, a backward transport equation, and at second order, an advection-diffusion/Fokker-Planck equation complemented with a mixed boundary condition at x = 0. We estimate their distance to the original system solution. We then turn to the inverse problem, i.e., how to estimate the initial size distribution from the time measurement of an average quantity, given by a moment of the solution. This question has already been studied for the first-order asymptotic model, and we analyse here the second-order asymptotic. Thanks to Carleman inequalities and to log-convexity estimates, we prove observability results and error estimates for a Tikhonov regularization. We then develop a Kalman-based observer approach, and implement it on simulated observations. Despite its severely ill-posed character, the second-order approach appears numerically more accurate than the first-order one. (10.3934/ipi.2025020)
    DOI : 10.3934/ipi.2025020
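To give a concrete feel for the Tikhonov regularization step mentioned in the abstract, here is a generic sketch on a toy ill-posed linear problem, with a smoothing forward operator standing in for the moment observation; it is not the paper's asymptotic model, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discretized toy inverse problem: recover u from y = A u + noise, where A is a
# strongly smoothing (hence ill-conditioned) operator, a stand-in for observing
# an averaged quantity rather than the distribution itself.
n = 200
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2)) / n  # Gaussian blur

u_true = np.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.005)
y = A @ u_true + 1e-4 * rng.standard_normal(n)

# Tikhonov-regularized least squares: minimize ||A u - y||^2 + lam * ||u||^2,
# solved in closed form via the normal equations.
lam = 1e-6
u_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(f"relative L2 error: {np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true):.3f}")
```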
  • A functional inequalities approach for the field-road diffusion model with (symmetric) nonlinear exchanges
    • Alfaro Matthieu
    • Chainais-Hillairet Claire
    • Nabet Flore
    2026. In this note, we consider the so-called field-road diffusion model in a bounded domain, consisting of two parabolic PDEs posed on sets of different dimensions and coupled through (symmetric) nonlinear exchange terms. We propose a new and rather direct functional inequalities approach to prove the exponential decay of a relative entropy, and thus the convergence of the solution towards the stationary state selected by the total mass of the initial datum.
  • Subadditivity and optimal matching of unbounded samples
    • Caglioti Emanuele
    • Goldman Michael
    • Pieroni Francesca
    • Trevisan Dario
    2026. We obtain new bounds for the optimal matching cost for empirical measures with unbounded support. For a large class of radially symmetric and rapidly decaying probability laws, we prove for the first time the asymptotic rate of convergence for the whole range of power exponents $p$ and dimensions $d$. Moreover, we identify the exact prefactor when $p\le d$. We cover in particular the Gaussian case, going far beyond the currently known bounds. Our proof technique is based on approximate sub- and super-additivity bounds along a geometric decomposition adapted to some features of the density, such as its radial symmetry and its decay at infinity.
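The central quantity of this entry, the optimal matching cost between empirical measures, can be computed exactly for moderate sample sizes via the assignment problem. A brief illustrative sketch with Gaussian (hence unbounded) samples and SciPy's Hungarian solver, unrelated to the paper's proof technique:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)

# Empirical optimal matching cost between two i.i.d. Gaussian samples in R^d:
#   min over permutations sigma of (1/n) * sum_i |X_i - Y_sigma(i)|^p
def matching_cost(n=500, d=2, p=2.0):
    X = rng.standard_normal((n, d))
    Y = rng.standard_normal((n, d))
    cost = cdist(X, Y) ** p                  # pairwise |X_i - Y_j|^p
    row, col = linear_sum_assignment(cost)   # optimal bipartite matching
    return cost[row, col].mean()

for n in (100, 200, 400, 800):
    print(n, f"{matching_cost(n):.4f}")
```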
  • Long-time behaviour of a multidimensional age-dependent branching process with a singular jump kernel modelling telomere shortening
    • Olayé Jules
    • Tomasevic Milica
    Electronic Journal of Probability, Institute of Mathematical Statistics (IMS), 2026, 31. In this article, we investigate the ergodic behaviour of a multidimensional age-dependent branching process with a singular jump kernel, motivated by the study of telomere shortening in cell populations. Our model tracks individuals evolving within a continuous-time framework indexed by a binary tree, characterised by age and a multidimensional trait. Branching events occur with rates dependent on age, where offspring inherit traits from their parent with a random increase or decrease in some coordinates, while most of them are left unchanged. Exponential ergodicity is obtained at the cost of an exponential normalisation, despite the fact that we have an unbounded age-dependent birth rate that may depend on the multidimensional trait, and a non-compact transition kernel. These two difficulties are respectively treated by stochastically comparing our model to Bellman-Harris processes, and by using a weak form of a Harnack inequality. We conclude this study by giving examples where the assumptions of our main result are verified. (10.1214/25-EJP1469)
    DOI : 10.1214/25-EJP1469
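A crude simulation can help picture the model class studied in this entry. The sketch below is a stand-in only: it uses Markovian (exponential) lifetimes instead of general age-dependent rates and an arbitrary shortening kernel, with each daughter losing a random amount on one randomly chosen coordinate while the other coordinates are copied unchanged.

```python
import heapq
import numpy as np

rng = np.random.default_rng(6)

# Each cell carries a vector of telomere lengths; at division it produces two
# daughters, each losing a random amount on one randomly chosen coordinate.
def simulate(t_max=8.0, d=4, init_len=10.0):
    counter = 0
    heap = [(rng.exponential(1.0), counter, np.full(d, init_len))]
    alive = []
    while heap:
        t, _, tel = heapq.heappop(heap)
        if t > t_max:                # cell still alive at the observation time
            alive.append(tel)
            continue
        for _ in range(2):           # two daughters at each division
            child = tel.copy()
            k = rng.integers(d)
            child[k] = max(child[k] - rng.exponential(1.0), 0.0)
            counter += 1
            heapq.heappush(heap, (t + rng.exponential(1.0), counter, child))
    return np.array(alive)

pop = simulate()
print(f"{len(pop)} cells alive, mean telomere length = {pop.mean():.2f}")
```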
  • On the simulation of extreme events with neural networks
    • Allouche Michaël
    • Girard Stéphane
    • Gobet Emmanuel
    2026. This article investigates the use of generative methods based on neural networks to simulate extreme events. Although very popular, these methods are mainly invoked in empirical works. Therefore, providing theoretical guidelines for using such models in an extreme-value context is of prime importance. To this end, we propose an overview of the most recent generative methods dedicated to extremes, giving some theoretical and practical tips on their tail behaviour thanks to both extreme-value and copula tools.
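One of the standard extreme-value diagnostics the survey alludes to, checking the tail behaviour of simulated samples, can be sketched as follows: sample a heavy-tailed law by inverse-CDF and estimate its tail index with a Hill estimator. This is a generic illustration, not an example taken from the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Inverse-CDF sampling from a Pareto law with tail index alpha:
#   P(X > x) = x^(-alpha) for x >= 1, so X = U^(-1/alpha) with U ~ Uniform(0, 1).
alpha = 2.5
u = rng.uniform(size=100_000)
x = u ** (-1.0 / alpha)

# Hill estimator of the tail index from the k largest order statistics.
def hill(sample, k):
    tail = np.sort(sample)[-(k + 1):]            # the (k+1) largest values
    return 1.0 / np.mean(np.log(tail[1:] / tail[0]))

for k in (100, 500, 2000):
    print(f"k = {k}: Hill estimate of alpha ~ {hill(x, k):.2f}")
```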
  • Finite element modelling for the reproduction of dynamic OCE measurements in the cornea
    • Merlini Giulia
    • Imperiale Sébastien
    • Allain Jean-Marc
    Journal of the Mechanics and Physics of Solids, Elsevier, 2026, 206, pp.106363. Recent advances in dynamic elastography, particularly through optical coherence tomography combined with transient excitations, have enabled rapid, localized, and non-invasive mechanical data acquisition of the cornea. These data open the path to early detection of pathologies and more accurate treatment. However, the analysis of the wave propagation is a complex mechanical problem: the cornea is a structure under pressure, with non-linear material behavior. Thus, computational analyses are needed to extract mechanical parameters from the data. In this study, we present a time-dependent finite element model for the reproduction of transient shear wave elastographic measurements in the cornea. The mechanical problem consists of a small-amplitude wave propagating in the cornea, which is largely deformed by intraocular pressure under physiological conditions. The model accounts for the anisotropic, hyperelastic, and incompressible behavior of the cornea, as well as its accurate geometry and the preloaded condition. We have implemented two different numerical approaches to solve first the static non-linear inflation of the cornea and then the linear wave propagation problem to reproduce the measurements. We investigate the impact of material anisotropy and prestress on wave propagation and demonstrate that intraocular pressure critically influences shear wave velocity. Additionally, by introducing a localized mechanical defect to simulate a pathological defect, we show that the simulated shear wave can detect and quantify mechanical weaknesses, suggesting potential as a diagnostic tool to assess corneal health. (10.1016/j.jmps.2025.106363)
    DOI : 10.1016/j.jmps.2025.106363
  • Nonlinear model calibration through bifurcation curves
    • Mélot Adrien
    • Denimal Goy Enora
    • Renson Ludovic
    Mechanical Systems and Signal Processing, Elsevier, 2026, 242, pp.113589. Nonlinear systems exhibit a plethora of complex dynamic behaviours that are difficult to model and predict accurately. This difficulty often arises from a lack of knowledge of the physics that induces the nonlinear behaviours and the strong sensitivity of the nonlinear dynamics to parameter variation. We introduce in this paper a methodology to carry out nonlinear model updating based on bifurcations. The proposed approach involves minimising the distance between experimental and numerical bifurcation curves, which are key dynamic features that define stability boundaries and regions of multi-stability. For the model, bifurcation curves are computed via standard numerical bifurcation tracking analyses. In the experiment, we use control-based continuation to obtain the data. The approach is first demonstrated on a Duffing and a beam system using synthetic data, before being applied to experimental data collected on a base-excited energy harvester with magnetic nonlinearity.
  • Quantitative sensitivity analysis for Fokker-Planck equation with respect to the Wasserstein distance
    • Morange Martin
    2026. We analyze the sensitivity of solutions to the Fokker-Planck equation with respect to some unknown parameter. Our main result provides quantitative upper bounds for the $p$-Wasserstein distance $\mathcal{W}_p$ between two solutions with different parameters, for every $p \geq 2$. We give two proofs of this result, the first relying on a synchronous coupling between two solutions of an SDE, and the second on the differentiation of the Kantorovitch dual formulation of optimal transport. We also provide more specific bounds in the case of the overdamped Langevin process, for which we are able to compare convergence to the invariant measure and sensitivity to the parameter.
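The synchronous-coupling argument mentioned in this last entry has a direct numerical counterpart: drive two overdamped Langevin diffusions with different parameters by the same Brownian increments, so that the coupled moment E|X_t - Y_t|^p upper-bounds W_p^p. A minimal Euler-Maruyama sketch with a quadratic potential and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(8)

# Overdamped Langevin dynamics dX_t = -V'(X_t; theta) dt + sqrt(2) dW_t with a
# quadratic potential V(x; theta) = theta * x^2 / 2. Two parameter values are run
# with the SAME Brownian increments (synchronous coupling), so that
#   W_p(law(X_t), law(Y_t))^p <= E|X_t - Y_t|^p.
theta_x, theta_y = 1.0, 1.2
n_paths, n_steps, dt, p = 20_000, 2_000, 1e-2, 2

x = rng.standard_normal(n_paths)   # common initial condition for both processes
y = x.copy()
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += -theta_x * x * dt + np.sqrt(2.0) * dw
    y += -theta_y * y * dt + np.sqrt(2.0) * dw

coupling_bound = np.mean(np.abs(x - y) ** p) ** (1.0 / p)
print(f"synchronous-coupling bound on W_{p}: {coupling_bound:.4f}")
```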