
Publications

Theses defended at CMAP are available via the following link:
Discover the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2025

  • Error-Based mesh selection for efficient numerical simulations with variable parameters
    • Dornier Hugo
    • Le Maître Olivier P
    • Congedo Pietro Marco
    • Salah El Din Itham
    • Marty Julien
    • Bourasseau Sébastien
    , 2025. Advanced numerical simulations often depend on mesh refinement techniques to manage discretization errors in complex models and reduce computational costs. This work concentrates on Adaptive Mesh Refinement (AMR) for steady-state solutions, which uses error estimators to iteratively refine the mesh locally and gradually tailor it to the solution. AMR requires evaluating the solution across a series of meshes. When solving the model for multiple operating conditions, such as in uncertainty quantification studies, full systematic adaptation can cause significant computational overhead. To mitigate this, the Error-based Mesh Selection (EMS) method is introduced to decrease the cost of adaptation. For each operating condition, EMS seeks to choose, from a library of pre-adapted meshes, the one that minimizes the discretization error. A key feature of this approach is the use of Gaussian Process models to predict the solution errors for each mesh in the library. These error models are built solely from the library's meshes and their solutions, using restriction errors as proxies for discretization errors, thereby avoiding additional model evaluations. The EMS method is tested on an analytical shock problem and a supersonic scramjet configuration, showing near-optimal mesh selection. The influence of library size on the resulting error level is also examined.
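    The error models at the heart of EMS can be illustrated with a short, hypothetical sketch: one Gaussian process per library mesh maps operating conditions to a (log) error proxy, and the mesh with the smallest predicted error is selected. All names and data below are illustrative, not the authors' implementation.
    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    # Operating conditions (e.g. Mach number, angle of attack); illustrative.
    theta_train = rng.uniform(0.0, 1.0, size=(30, 2))

    n_meshes = 5
    error_models = []
    for m in range(n_meshes):
        # Proxy errors for mesh m at each condition (toy numbers);
        # in EMS these would be restriction errors from the mesh library.
        log_err = np.log(0.1 / (m + 1) + 0.05 * rng.random(30))
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
        gp.fit(theta_train, log_err)
        error_models.append(gp)

    def select_mesh(theta_new):
        """Return the index of the library mesh with the smallest predicted error."""
        preds = [gp.predict(theta_new.reshape(1, -1))[0] for gp in error_models]
        return int(np.argmin(preds))

    print(select_mesh(np.array([0.4, 0.7])))
    ```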
  • On the stability of the invariant probability measures of McKean-Vlasov equations
    • Cormier Quentin
    Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 2025, 61 (4). We study the long-time behavior of some McKean-Vlasov stochastic differential equations used to model the evolution of large populations of interacting agents. We give conditions ensuring the local stability of an invariant probability measure. Lions derivatives are used in a novel way to obtain our stability criteria. We obtain results for non-local McKean-Vlasov equations on $\mathbb{R}^d$ and for McKean-Vlasov equations on the torus where the interaction kernel is given by a convolution. On $\mathbb{R}^d$, we prove that the location of the roots of an analytic function determines the stability. On the torus, our stability criterion involves the Fourier coefficients of the interaction kernel. In both cases, we prove the convergence in the Wasserstein metric $W_1$ with an exponential rate of convergence. (10.1214/24-AIHP1504)
    DOI : 10.1214/24-AIHP1504
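    For orientation, a generic McKean-Vlasov SDE of the type studied here reads (schematic form; the paper's precise assumptions differ):
    $$ dX_t = b(X_t, \mu_t)\,dt + \sigma\, dB_t, \qquad \mu_t = \operatorname{Law}(X_t), $$
    and on the torus the interaction takes the convolution form $b(x,\mu) = -(\nabla W * \mu)(x)$, whose Fourier coefficients enter the stability criterion.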
  • Scaling limits for a population model with growth, division and cross-diffusion
    • Doumic Marie
    • Hecht Sophie
    • Hoffmann Marc
    • Peurichard Diane
    Mathematical Models and Methods in Applied Sciences, World Scientific Publishing, 2025, 35 (12), pp.2611-2660. Originally motivated by the morphogenesis of bacterial microcolonies, the aim of this article is to explore models through different scales for a spatial population of interacting, growing and dividing particles. We start from a microscopic stochastic model, write the corresponding stochastic differential equation satisfied by the empirical measure, and rigorously derive its mesoscopic (mean-field) limit. Under smoothness and symmetry assumptions for the interaction kernel, we then obtain entropy estimates, which provide us with a localization limit at the macroscopic level. Finally, we perform a thorough numerical study in order to compare the three modeling scales. (10.1142/S0218202525500472)
    DOI : 10.1142/S0218202525500472
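    The passage from the microscopic to the mesoscopic scale goes through the empirical measure; schematically, for $N$ particles $X^1_t,\dots,X^N_t$,
    $$ \mu^N_t = \frac{1}{N}\sum_{i=1}^N \delta_{X^i_t} \;\xrightarrow[N\to\infty]{}\; \mu_t, $$
    where $\mu_t$ solves the mean-field (cross-diffusion) limit equation; the exact dynamics and the topology of convergence are as specified in the paper.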
  • Wave turbulence, thermalization and multimode locking in optical fibers
    • Ferraro Mario
    • Baudin Killian
    • Gervaziev Mikhail D
    • Fusaro Adrien
    • Picozzi A.
    • Garnier Josselin
    • Millot G.
    • Kharenko Denis S.
    • Podivilov Evgeniy V.
    • Babin Sergey A.
    • Mangini Fabio
    • Wabnitz Stefan W
    Physica D: Nonlinear Phenomena, Elsevier, 2025, 481, pp.134758. We present a comprehensive overview of recent advances in theory and experiments on complex light propagation phenomena in nonlinear multimode fibers. On the basis of the wave turbulence theory, we derive kinetic equations describing the out-of-equilibrium process of optical thermalization toward the Rayleigh–Jeans (RJ) equilibrium distribution. Our theory is applied to explain the effect of beam self-cleaning (BSC) in graded-index (GRIN) fibers, whereby a speckled beam transforms into a bell-shaped beam at the fiber output as the input peak power grows larger. Although the output beam is typically dominated by the fundamental mode of the fiber, higher-order modes (HOMs) cannot be fully depleted, as described by the turbulence cascades associated with the conserved quantities. We theoretically explore the role of random refractive index fluctuations along the fiber core, and show how these imperfections may turn out to assist the observation of BSC in a practical experimental setting. This conclusion is supported by the derivation of wave turbulence kinetic equations that account for the presence of a time-dependent disorder (random mode coupling). The kinetic theory reveals that a weak disorder accelerates the rate of RJ thermalization and beam cleaning condensation. On the other hand, although strong disorder is expected to suppress wave condensation, the kinetic equation reveals that an out-of-equilibrium process of condensation and RJ thermalization can occur in a regime where disorder predominates over nonlinearity. In general, the kinetic equations are validated by numerical simulations of the generalized nonlinear Schrödinger equation. We outline a series of recent experiments that confirm the statistical mechanics approach for describing beam propagation and thermalization. For example, we highlight the demonstration of entropy growth, and point out that there are inherent limits to peak-power scaling in multimode fiber lasers. We conclude by pointing out the experimental observation that BSC is accompanied by an effect of modal phase-locking. On the one hand, this explains the observed preservation of the spatial coherence of the beam; on the other hand, it points to the need to extend current descriptions in future research. (10.1016/j.physd.2025.134758)
    DOI : 10.1016/j.physd.2025.134758
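    The Rayleigh–Jeans equilibrium toward which the kinetic equations drive the mode occupancies has the classical form
    $$ n_k^{\mathrm{RJ}} = \frac{T}{\epsilon_k - \mu}, $$
    where $\epsilon_k$ is the propagation constant (eigenvalue) of mode $k$, and the temperature $T$ and chemical potential $\mu$ are fixed by the conserved power and energy. Beam self-cleaning corresponds to a macroscopic population of the fundamental mode, i.e. $\mu \to \epsilon_0$.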
  • Sensitivity analysis of a flow redistribution model for a multidimensional and multifidelity simulation of fuel assembly bow in a pressurized water reactor
    • Abboud Ali
    • de Lambert Stanislas
    • Garnier Josselin
    • Leturcq Bertrand
    • Lamorte Nicolas
    Nuclear Engineering and Design, Elsevier, 2025, 443, pp.114259. In the core of nuclear reactors, fluid–structure interaction and intense irradiation lead to progressive deformation of fuel assemblies. When this deformation is significant, it can lead to additional costs and longer fuel unloading and reloading operations. Therefore, it is preferable to adopt a fuel management that avoids excessive deformation and interactions between fuel assemblies. However, the prediction of deformation and interactions between fuel assemblies is uncertain. Uncertainties affect neutronics, thermohydraulics and thermomechanics parameters. Indeed, the initial uncertainties are propagated over several successive power cycles of twelve months each through the coupling of non-linear, nested and multidimensional thermal–hydraulic and thermomechanical simulations. In this article, we set out to study the hydraulic contribution and quantify the associated uncertainty. To achieve this objective, we develop a multi-stage approach to carry out an initial sensitivity analysis, highlighting the most influential parameters in the hydraulic model. By optimally adjusting these parameters, we aim to obtain a more accurate description of the flow redistribution phenomenon in the reactor core. The aim of the sensitivity analysis presented in this article is to construct an accurate and suitable surrogate model that represents the in-core lateral hydraulic forces in a given state. This surrogate model could then be coupled with a thermomechanical model to quantify the final uncertainty in the simulation of fuel assembly bow within a pressurized water reactor. This approach will provide a better understanding of the interactions between hydraulic and thermomechanical phenomena, thereby improving the reliability and accuracy of the simulation results. (10.1016/j.nucengdes.2025.114259)
    DOI : 10.1016/j.nucengdes.2025.114259
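    The abstract does not name the sensitivity measure used; a standard choice for ranking influential parameters of such a hydraulic model is the first-order Sobol index,
    $$ S_i = \frac{\operatorname{Var}\!\big(\mathbb{E}[Y \mid X_i]\big)}{\operatorname{Var}(Y)}, $$
    which quantifies the share of the variance of the output $Y$ (e.g. a lateral hydraulic force) explained by input $X_i$ alone; in practice it is estimated on the surrogate model rather than on the full simulation chain.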
  • Interactions and opportunities at the crossroads of deep probabilistic modeling and statistical inference through Markov Chains Monte Carlo
    • Grenioux Louis
    , 2025. This thesis advances the field of sampling, a cornerstone of Bayesian inference, computational physics, and probabilistic modeling, where the goal is to generate samples from a known probability density. A related challenge arises in generative modeling, which seeks to produce new data resembling a given dataset, a problem that has seen major breakthroughs through recent advances in deep learning. The central aim of this work is to leverage modern generative models to enhance classical sampling frameworks. The study begins by examining the inherent difficulties of multi-modal sampling, identifying key limitations of both classical and advanced Monte Carlo methods. It then explores the integration of pre-trained normalizing flows into traditional Monte Carlo schemes, providing practical guidance on their performance across diverse target distributions. Building on this, diffusion models are incorporated into advanced annealed Monte Carlo methods, revealing both their potential and their limitations. The work also investigates how diffusion models can be embedded within a variational inference framework. In parallel, it proposes a learning-free diffusion-based sampler that replaces neural approximators with Monte Carlo estimators. Finally, these enhanced sampling strategies are applied to the training of energy-based models, introducing a novel algorithm in which a normalizing flow serves as an auxiliary sampler to facilitate the training of these expressive yet challenging generative models.
  • High Performance Krylov Subspace Solvers with Preconditioning and Deflation
    • Spillane Nicole
    , 2025. Linear solvers are an essential tool for the accurate numerical simulation of large scale problems. My work addresses the analysis, development and application of linear solvers for problems that arise in science and industry. My particular focus is on a class of iterative linear solvers called the Krylov subspace methods. Together with the accelerators that are preconditioning and deflation, Krylov subspace methods offer a flexible framework for a wide range of problems. The contributions in this manuscript fall into three main topics. A first set of results is for symmetric positive definite matrices solved by the conjugate gradient method preconditioned by domain decomposition. A unified theory of spectral coarse spaces is developed, a new fully algebraic solver is introduced, and open-source software is made available to apply these methods. The second set of results addresses multipreconditioning, a technique that allows users to exploit not just one, but several preconditioners. At each iteration, the space in which the solution is optimized grows much faster than with classical preconditioning. This can significantly accelerate convergence, but multipreconditioned iterations are also more expensive. A significant contribution is adaptive multipreconditioning, in which it is decided on the fly whether to apply a preconditioner or a multipreconditioner. This work is most advanced for symmetric positive definite problems, but non-symmetric preconditioners and non-symmetric problems are also considered. Finally, the last set of results is for non-Hermitian problems solved by GMRES. Convergence of GMRES is still not fully predictable for general matrices, despite it having been an active field of research for many years. The contributions in this manuscript are new convergence bounds that make apparent the role played by the preconditioner, the deflation operator and the choice of a weighted inner product. An important finding is that, if the matrix is positive definite, it is a good strategy to precondition its Hermitian part with a Hermitian positive definite preconditioner. Convergence then depends on how well the Hermitian part is preconditioned and on how non-Hermitian the problem is. Finally, a new spectral deflation space is proposed to improve the term in the bound that depends on the non-Hermitianness of the problem. This has the effect of accelerating convergence in practice too.
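    As a minimal, hedged illustration of the preconditioned Krylov solvers discussed above, the sketch below solves a symmetric positive definite model problem with SciPy's conjugate gradient; the Jacobi preconditioner is a simple stand-in for the domain decomposition and deflation operators studied in the manuscript.
    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n = 1000
    # A symmetric positive definite model problem (1D Laplacian).
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Jacobi preconditioner: M^{-1} x = diag(A)^{-1} x.
    d_inv = 1.0 / A.diagonal()
    M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

    x, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"info={info}",
          np.linalg.norm(A @ x - b))
    ```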
  • Optimal business model adaptation plan for a company under a transition scenario
    • Ndiaye Elisa
    • Bezat Antoine
    • Gobet Emmanuel
    • Guivarch Céline
    • Jiao Ying
    , 2024. Climate stress-tests aim at projecting the financial impacts of climate change, covering both transition and physical risks under given macro scenarios. However, in practice, transition risk has been the main focus of supervisory and academic exercises, and existing tools to downscale these macroeconomic projections to the firm level remain limited. We develop a methodology to downscale sector-level trajectories into firm-level projections for credit risk stress-tests. The approach combines probabilistic modeling with stochastic control to capture firm-level uncertainty and optimal decision-making. It can be applied to any transition scenario or sector and highlights how firm-level characteristics such as initial intensity, abatement cost, and exposure to uncertainty shape heterogeneous firm-level responses to the transition. The model explicitly incorporates firm-level business uncertainty through stochastic dynamics on relative emissions and sales, which affect both optimal decisions and resulting financial projections. Firms’ rational behavior is modeled as a stochastic minimization problem, solved numerically through a method we call Backward Sampling. Illustrating our method with the NGFS transition scenarios and three types of companies (Green, Brown and Average), we show that firm-specific intensity reduction strategies yield significantly different financial outcomes compared to assuming uniform sectoral decarbonisation rates. Moreover, investing an amount equivalent to the total carbon tax paid at a given date is limited by its lack of a forward-looking feature, making it insufficient to buffer against future carbon shocks in a disorderly transition. This highlights the importance of firm-level granularity in climate risk assessments. By explicitly modeling firm heterogeneity and optimal decision-making under uncertainty, our methodology complements existing approaches to granular transition risk assessment and contributes to the ongoing development of scenario-based credit risk projections at the firm level.
  • Uncertainty quantification applied to fuel assembly bow in a pressurized water reactor
    • Abboud Ali
    , 2025. In the core of nuclear reactors, fluid-structure interactions combined with intense irradiation lead to progressive deformations of fuel assemblies over successive operating cycles. When these deformations become significant, they can disrupt the positioning of the assemblies, damage spacer grids, or block and delay control rod insertion, compromising reactor safety and increasing operational costs. Understanding and controlling these deformations is therefore essential. However, their simulation remains uncertain due to the complexity of the involved multiphysics couplings: thermo-hydraulic and thermo-mechanical models are highly nonlinear, high-dimensional, and interact over several irradiation cycles, making uncertainty propagation difficult to assess and control. This thesis aims to develop a robust methodology for uncertainty quantification (UQ). The methodology must account for the coupled, nonlinear, and computationally expensive nature of the simulations while providing analysis tools to improve predictive reliability. To this end, we aim to address the following questions: (Q1) What is the magnitude of uncertainties in the results of coupled simulations? (Q2) What levers are available to reduce these uncertainties? (Q3) How does a design change affect simulation outcomes? The proposed approach begins with a separate analysis of the thermo-hydraulic and mechanical codes to identify their main sources of uncertainty using sensitivity analysis techniques. Surrogate models are then developed for each domain to reduce computational cost while maintaining the accuracy required for UQ. These surrogate models are finally coupled through an efficient strategy to simulate a full irradiation cycle and propagate uncertainties in a controlled manner. The methodology is structured around four main objectives: (O1) Identify and characterize sources of uncertainty throughout the simulation chain. (O2) Develop surrogate models adapted to uncertainty propagation and sensitivity analyses. (O3) Identify influential parameters and nonlinear interactions through global sensitivity analysis. (O4) Quantify and, where possible, reduce uncertainties to guide modeling or experimental efforts. In summary, this thesis proposes an uncertainty quantification approach tailored to complex multiphysics simulations, with the ultimate goal of improving the safety, reliability, and cost-efficiency of nuclear reactor operation.
  • Stability analysis of a new curl-based full field reconstruction method in 2D isotropic nearly-incompressible elasticity
    • Chibli Nagham
    • Genet Martin
    • Imperiale Sébastien
    , 2025. In time-harmonic elastography, the shear modulus is typically inferred from full field displacement data by solving an inverse problem based on the time-harmonic elastodynamic equation. In this paper, we focus on nearly incompressible media, which pose robustness challenges, especially in the presence of noisy data. Restricting ourselves to 2D and considering an isotropic, linearly deforming medium, we reformulate the problem as a non-autonomous hyperbolic system and, through theoretical analysis, establish existence, uniqueness, and stability of the inverse problem. To ensure robustness with noisy data, we propose a least-squares approach with regularization. The convergence properties of the method are verified numerically using in silico data.
  • Semi-discrete convergence analysis of a numerical method for waves in nearly-incompressible media with spectral finite elements
    • Ramiche Zineb
    • Imperiale Sébastien
    , 2025. In this work, we present a convergence analysis of a fully explicit high-order space discretisation approach for the computation of elastic field propagation in nearly incompressible media. Our approach relies on the use of high-order continuous spectral finite elements with mass-lumping. We present an approach that is valid for full hexahedral and quadrilateral meshes, where the elastic field is sought in the space of $Q_k$ continuous finite elements and the pressure in $Q_{k-2}$ discontinuous finite elements. Furthermore, we prove the stability of the finite element discretization. This allows us to carry out error estimates for the semi-discrete problem in space, accounting in particular for quadrature errors.
  • Sensitivity Analysis of Emissions Markets: A Discrete-Time Radner Equilibrium Approach
    • Crépey Stéphane
    • Tadese Mekonnen
    • Vermandel Gauthier
    , 2025. Emissions markets play a crucial role in reducing pollution by encouraging firms to minimize costs. However, their structure heavily relies on the decisions of policymakers, on future economic activity, and on the availability of abatement technologies. This study examines how changes in regulatory standards, firms' abatement costs, and emissions levels affect allowance prices and firms' efforts to reduce emissions. This is done in a Radner equilibrium framework encompassing inter-temporal decision-making, uncertainty, and a comprehensive assessment of the market dynamics and outcomes. The results of this research have the potential to assist policymakers in enhancing the structure and efficiency of emissions trading systems, through an in-depth comprehension of the reactions of market stakeholders towards different market situations.
  • Curvature penalization of strongly anisotropic interfaces models and their phase-field approximation
    • Babadjian Jean-François
    • Buet Blanche
    • Goldman Michael
    , 2025. This paper studies the effect of anisotropy on sharp or diffuse interfaces models. When the surface tension is a convex function of the normal to the interface, the anisotropy is said to be weak. This usually ensures the lower semicontinuity of the associated energy. If, however, the surface tension depends on the normal in a nonconvex way, this so-called strong anisotropy may lead to instabilities related to the lack of lower semicontinuity of the functional. We investigate the regularizing effects of adding a higher order term of Willmore type to the energy. We consider two types of problems. The first one is an anisotropic nonconvex generalization of the perimeter, and the second one is an anisotropic nonconvex Mumford-Shah functional. In both cases, lower semicontinuity properties of the energies with respect to a natural mode of convergence are established, as well as Γ-convergence type results by means of a phase field approximation. In comparison with related results for curvature dependent energies, one of the original aspects of our work is that, in the context of free discontinuity problems, we are able to consider singular structures such as crack-tips or multiple junctions.
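    Schematically, the regularized anisotropic perimeter considered is of the type
    $$ \mathcal{F}_\varepsilon(E) = \int_{\partial E} \varphi(\nu_E)\, d\mathcal{H}^{d-1} + \varepsilon \int_{\partial E} H^2 \, d\mathcal{H}^{d-1}, $$
    where $\varphi$ is the (possibly nonconvex) surface tension, $\nu_E$ the outer normal, and $H$ the curvature; the Willmore-type second term restores the compactness and lower semicontinuity lost when $\varphi$ is nonconvex. This is a schematic template; the paper's exact functionals, including the Mumford-Shah variant, differ in detail.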
  • Self-interacting approximation to McKean-Vlasov long-time limit: a Markov chain Monte Carlo method
    • Du Kai
    • Ren Zhenjie
    • Suciu Florin
    • Wang Songbo
    Journal de Mathématiques Pures et Appliquées, Elsevier, 2025, 205, pp.103782. For a certain class of McKean-Vlasov processes, we introduce proxy processes that substitute the mean-field interaction with self-interaction, employing a weighted occupation measure. Our study encompasses two key achievements. First, we demonstrate the ergodicity of the self-interacting dynamics, under broad conditions, by applying the reflection coupling method. Second, in scenarios where the drifts are negative intrinsic gradients of convex mean-field potential functionals, we use entropy and functional inequalities to demonstrate that the stationary measures of the self-interacting processes approximate the invariant measures of the corresponding McKean-Vlasov processes. As an application, we show how to learn the optimal weights of a two-layer neural network by training a single neuron. (10.1016/j.matpur.2025.103782)
    DOI : 10.1016/j.matpur.2025.103782
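    Schematically, the proxy process replaces the law $\mu_t$ in the McKean-Vlasov drift by a weighted occupation measure of the path itself:
    $$ dY_t = b\!\left(Y_t,\; \frac{\int_0^t w(s)\,\delta_{Y_s}\,ds}{\int_0^t w(s)\,ds}\right) dt + \sigma\, dB_t, $$
    so that a single trajectory, rather than a large interacting system, carries the mean-field information; the precise class of drifts and weights is as specified in the paper.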
  • Stochastic and surrogate assisted multiobjective optimization with CMA-ES
    • Gharafi Mohamed
    , 2025. This thesis tackles expensive multiobjective optimization by extending the COMO-CMA-ES algorithm with surrogate models to reduce costly function evaluations. Built on the SOFOMORE framework, which decomposes a multiobjective problem into single-objective subproblems, each optimized in COMO-CMA-ES by a CMA-ES instance, the approach leverages surrogate assistance to accelerate convergence. Rather than modeling each objective separately, a linear-quadratic surrogate inspired by the lq-CMA-ES algorithm is trained directly on the fitness function based on the Uncrowded Hypervolume Improvement (UHVI) indicator. This simplifies model management and maintains efficiency. Experimental results show up to a threefold reduction in the number of function evaluations for problems with quadratic structure, while keeping similar iteration counts. A surrogate-based parallelization strategy further reduces the number of required iterations, improving performance in computationally expensive contexts. In this work we also propose a variant of CMA-ES assisted by surrogate models that preserves invariance to monotone transformations of the objective function. This property enhances the algorithm's robustness, making it more reliable across a wide range of problem formulations. Finally, the thesis investigates the scalability of the SOFOMORE framework and observes a quadratic decay in convergence rate as the desired Pareto front resolution increases. This phenomenon, explained through an idealized simulation model, is intrinsic to UHVI and more generally to hypervolume-based formulations. Overall, the work highlights both the potential and the structural limits of surrogate-assisted multiobjective optimization for expensive problems.
  • Stability of non-conservative cross diffusion model and approximation by stochastic particle systems
    • Bansaye Vincent
    • Bertolino Alexandre
    • Moussa Ayman
    , 2025. We study the stability of non-conservative deterministic cross diffusion models and prove that they are approximated by stochastic population models when the populations become locally large. In this model, the individuals of two species move, reproduce and die with rates sensitive to the local densities of the two species. Quantitative estimates are given and convergence is obtained as soon as the population per site and the number of sites go to infinity. The proofs rely on the extension of stability estimates via a duality approach under a smallness condition and on the development of large deviation estimates for structured population models, which are of independent interest. The proofs also involve martingale estimates in $H^{-1}$ and improve the approximation results in the conservative case as well.
  • Nonparametric hazard rate estimation with associated kernels and minimax bandwidth choice
    • Breuil Luce
    • Kaakai Sarah
    , 2025. In this paper, we introduce a general theoretical framework for nonparametric hazard rate estimation using associated kernels, whose shapes depend on the point of estimation. Within this framework, we establish rigorous asymptotic results, including a second-order expansion of the MISE, and a central limit theorem for the proposed estimator. We also prove a new oracle-type inequality for both local and global minimax bandwidth selection, extending the Goldenshluger–Lepski method to the context of associated kernels. Our results propose a systematic way to construct and analyze new associated kernels. Finally, we show that the general framework applies to the Gamma kernel, and we provide several examples of applications on simulated data and experimental data for the study of aging. (10.48550/arXiv.2509.24535)
    DOI : 10.48550/arXiv.2509.24535
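    A toy, hypothetical sketch of the idea of an associated kernel: the Gamma kernel of Chen (2000) changes shape with the estimation point $x$, which avoids boundary bias near $x = 0$. The naive hazard estimate below (density over empirical survival, no censoring) is for illustration only and is not the estimator analyzed in the paper.
    ```python
    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(1)
    X = rng.weibull(1.5, size=500)  # toy lifetime data

    def gamma_kernel_density(x, data, b=0.05):
        # Gamma associated kernel: shape x/b + 1, scale b, evaluated at the data.
        return gamma.pdf(data, a=x / b + 1.0, scale=b).mean()

    def hazard(x, data, b=0.05):
        f = gamma_kernel_density(x, data, b)
        S = (data > x).mean()  # empirical survival; assumes no censoring
        return f / max(S, 1e-12)

    print(hazard(1.0, X))
    ```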
  • Hybrid stochastic-structural modelling of particle-laden turbulent flows based on wavelet reconstruction
    • Letournel Roxane
    • Morhain Clément
    • Massot Marc
    • Vié Aymeric
    , 2025. Reduced-order modelling and simulation of turbulent particle-laden flows is required in numerous configurations where resolving the whole spectrum of turbulent scales through DNS is out of reach. Whereas structural or stochastic models have been derived to provide a synthetic turbulence model for the non-resolved scales of the gaseous flow field, reproducing preferential concentration is challenging because it requires capturing both spatial and temporal correlations. We present a novel reduced-order framework that overcomes this limitation by combining wavelet-based structural modelling with stochastic evolution. Using compactly supported divergence-free wavelets within a multiresolution analysis, the method provides direct control over spatial structures and correlations of synthetic multiscale velocity fields. In particular, a dedicated procedure makes it possible to enforce a prescribed turbulent energy spectrum despite the nonlocal contribution in Fourier space of the wavelet basis functions. The stochastic evolution of wavelet coefficients further ensures consistent temporal correlations. The proposed framework is evaluated in homogeneous isotropic turbulence under a fully reduced setting, where all turbulent scales must be provided by the model. Results show that it accurately reproduces preferential concentration across a wide range of Stokes numbers, achieving closer agreement with DNS data than classical Fourier-based kinematic simulations. This establishes a versatile and physically consistent turbulence model that combines structural fidelity with stochastic dynamics, offering a new tool to investigate particle–turbulence interactions.
  • Maximal Entropy Random Walks in Z: Random and non-random environments
    • Duboux Thibaut
    • Gerin Lucas
    • Offret Yoann
    Journal of Statistical Physics, Springer Verlag, 2025, 192 (10), pp.140. The Maximal Entropy Random Walk (MERW) is a natural process on a finite graph, introduced a few years ago with motivations from theoretical physics. The construction of this process relies on Perron-Frobenius theory for adjacency matrices. Generalizing to infinite graphs is rather delicate, and in this article, we study in detail specific models of the MERW on Z with loops, for both random and non-random loops. Thanks to an explicit combinatorial representation of the corresponding Perron-Frobenius eigenvectors, we are able to precisely determine the asymptotic behavior of these walks. We show, in particular, that essentially all MERWs on Z with loops have positive speed. (10.1007/s10955-025-03516-8)
    DOI : 10.1007/s10955-025-03516-8
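    For reference, on a finite graph with adjacency matrix $A$, the MERW transition probabilities are built from the Perron–Frobenius data $(\lambda, \psi)$ of $A$:
    $$ P_{ij} = \frac{A_{ij}}{\lambda} \, \frac{\psi_j}{\psi_i}, $$
    which maximizes the entropy rate among walks on the graph and makes all trajectories of a given length and given endpoints equiprobable; it is the extension of this construction to infinite graphs such as $\mathbb{Z}$ with loops that the paper carries out.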
  • Improved Polynomial Bounds and Acceleration of GMRES by Solving a min-max Problem on Rectangles, and by Deflating
    • Spillane Nicole
    • Szyld Daniel B
    , 2025. Polynomial convergence bounds are considered for left, right, and split preconditioned GMRES. They include the cases of Weighted and Deflated GMRES for a linear system Ax = b. In particular, the case of positive definite A is considered. The well-known polynomial bounds are generalized to the cases considered, and then reduced to solving a min-max problem on rectangles on the complex plane. Several approaches are considered and compared. The new bounds can be improved by using specific deflation spaces and preconditioners. This in turn accelerates the convergence of GMRES. Numerical examples illustrate the results obtained.
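    The bounds in question descend from the classical polynomial bound for GMRES: if the (preconditioned, deflated) matrix is diagonalizable, $A = V \Lambda V^{-1}$, with spectrum contained in a rectangle $\mathcal{R}$ avoiding the origin, then schematically
    $$ \frac{\|r_k\|}{\|r_0\|} \;\le\; \kappa(V) \, \min_{\substack{p \in \mathbb{P}_k,\; p(0)=1}} \; \max_{z \in \mathcal{R}} |p(z)|. $$
    The display above is the standard template; the exact forms of the bounds in the weighted and deflated cases differ, and the paper's contribution is to solve and exploit this min-max problem on rectangles.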
  • Extended reference prior theory for objective and practical inference, application to robust and auditable seismic fragility curve estimation
    • van Biesbroeck Antoine
    , 2025. Reference prior theory provides a principled framework for objective Bayesian inference, aiming to minimize subjective input and to let the information carried by the data drive the distribution of the estimates. For this reason, the application of this theory to the estimation of seismic fragility curves is particularly relevant. Indeed, these curves are essential elements of seismic probabilistic risk assessment studies; they express the probability of failure of a mechanical structure as a function of indicators that define seismic scenarios. Since they inform critical decisions in infrastructure safety, complete auditability of the pipeline that leads to the estimates of these curves is required. This thesis investigates the interplay between reference prior theory and seismic fragility curve estimation, yielding original contributions in these two domains. First, we complement the theoretical foundations of reference priors by developing novel constructions of them. Our goal is to support their objectivity while improving their practical applicability. Our results take the form of theoretical contributions in this domain that are based on a generalized definition of the mutual information. Our approaches tackle the principal issues of reference priors, namely their improper character, or that of their posterior, and their complex formulation for practical use. Second, we revisit the estimation of seismic fragility curves based on the prominent probit-lognormal model in a context where the data are particularly sparse. Our goal is to conduct a Bayesian estimation of seismic fragility curves that leverages every sort of information, including a priori information, in order to provide estimates that are robust and auditable. Our results highlight the limitations and irregularities of the model and propose methods that provide accurate and efficient estimates of the curves. Our approaches are evaluated on several case studies taken from the nuclear industry. This thesis builds a strong link between these two domains: the application to seismic fragility curves not only motivated the theoretical developments but also directly benefited from them, ultimately producing a more robust, interpretable, and verifiable estimation framework.
  • Improving the scalability of a high-order atmospheric dynamics solver based on the deal.II library
    • Orlando Giuseppe
    • Benacchio Tommaso
    • Bonaventura Luca
    Procedia Computer Science, Elsevier, 2025, 267, pp.227-236. We present recent advances on the massively parallel performance of a numerical scheme for atmosphere dynamics applications based on the deal.II library. The implicit-explicit discontinuous finite element scheme is based on a matrix-free approach, meaning that no global sparse matrix is built and only the action of the linear operators on a vector is actually implemented. Following a profiling analysis, we focus on the performance optimization of the numerical method and describe the impact of different preconditioning and solving techniques in this framework. Moreover, we show how the use of the latest version of the deal.II library and of suitable execution flags can improve the parallel performance. (10.1016/j.procs.2025.08.249)
    DOI : 10.1016/j.procs.2025.08.249
  • A reproducible comparative study of categorical kernels for Gaussian process regression, with new clustering-based nested kernels
    • Carpintero Perez Raphaël
    • Da Veiga Sébastien
    • Garnier Josselin
    , 2025. Designing categorical kernels is a major challenge for Gaussian process regression with continuous and categorical inputs. Despite previous studies, it is difficult to identify a preferred method, either because the evaluation metrics, the optimization procedure, or the datasets change depending on the study. In particular, reproducible code is rarely available. The aim of this paper is to provide a reproducible comparative study of all existing categorical kernels on many of the test cases investigated so far. We also propose new evaluation metrics inspired by the optimization community, which provide quantitative rankings of the methods across several tasks. From our results on datasets which exhibit a group structure on the levels of categorical inputs, it appears that nested kernels methods clearly outperform all competitors. When the group structure is unknown or when there is no prior knowledge of such a structure, we propose a new clustering-based strategy using target encodings of categorical variables. We show that on a large panel of datasets, which do not necessarily have a known group structure, this estimation strategy still outperforms other approaches while maintaining low computational cost.
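    For concreteness, here is a small handwritten sketch of GP regression with one continuous and one categorical input, using the exchangeable categorical kernel, one of the simplest baselines compared in such studies and not the clustering-based nested kernel proposed in the paper. Everything below is illustrative.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy data: one continuous input x and one categorical input c in {0, 1, 2}.
    n = 40
    x = rng.uniform(0, 1, n)
    c = rng.integers(0, 3, n)
    y = np.sin(3 * x) + 0.3 * c + 0.05 * rng.standard_normal(n)

    def kernel(x1, c1, x2, c2, ell=0.2, rho=0.5):
        # Product of an RBF kernel on the continuous part and an
        # exchangeable kernel on the categorical part:
        # k_cat(c, c') = 1 if c == c' else rho.
        k_cont = np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)
        k_cat = np.where(c1[:, None] == c2[None, :], 1.0, rho)
        return k_cont * k_cat

    K = kernel(x, c, x, c) + 1e-4 * np.eye(n)  # jitter for stability
    alpha = np.linalg.solve(K, y)

    # Posterior mean at a new point (x*, c*).
    x_star, c_star = np.array([0.5]), np.array([1])
    k_star = kernel(x_star, c_star, x, c)
    print((k_star @ alpha)[0])
    ```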
  • Differentiable Expectation-Maximisation and Applications to Gaussian Mixture Model Optimal Transport
    • Boïté Samuel
    • Tanguy Eloi
    • Delon Julie
    • Desolneux Agnès
    • Flamary Rémi
    , 2025. The Expectation-Maximisation (EM) algorithm is a central tool in statistics and machine learning, widely used for latent-variable models such as Gaussian Mixture Models (GMMs). Despite its ubiquity, EM is typically treated as a non-differentiable black box, preventing its integration into modern learning pipelines where end-to-end gradient propagation is essential. In this work, we present and compare several differentiation strategies for EM, from full automatic differentiation to approximate methods, assessing their accuracy and computational efficiency. As a key application, we leverage this differentiable EM in the computation of the Mixture Wasserstein distance $\mathrm{MW}_2$ between GMMs, allowing $\mathrm{MW}_2$ to be used as a differentiable loss in imaging and machine learning tasks. To complement our practical use of $\mathrm{MW}_2$, we contribute a novel stability result which provides theoretical justification for the use of $\mathrm{MW}_2$ with EM, and also introduce a novel unbalanced variant of $\mathrm{MW}_2$. Numerical experiments on barycentre computation, colour and style transfer, image generation, and texture synthesis illustrate the versatility of the proposed approach in different settings.
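    The differentiability question concerns the EM updates themselves; for a GMM with parameters $(\pi_k, \mu_k, \Sigma_k)$ each update is a smooth map of the parameters, e.g.
    $$ \gamma_{ik} = \frac{\pi_k \, \mathcal{N}(x_i; \mu_k, \Sigma_k)}{\sum_j \pi_j \, \mathcal{N}(x_i; \mu_j, \Sigma_j)}, \qquad \mu_k^{\text{new}} = \frac{\sum_i \gamma_{ik}\, x_i}{\sum_i \gamma_{ik}}, $$
    so an unrolled EM run can, in principle, be differentiated end-to-end by automatic differentiation; the paper compares this with cheaper approximate strategies.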