Publications

Theses defended at CMAP are available at the following link:
Discover the CMAP theses

Listed below, by year, are the publications recorded in the HAL open archive.

2025

  • Optimal observer theory in manifolds – from formulations to applications
    • Le Ruz Gaël
    , 2025. Data assimilation is the process of optimally combining dynamical models with observational data in order to estimate the evolving state and parameters of a physical system. In the deterministic optimal observer framework, this is achieved by defining an observer: a modified version of the original dynamics, augmented with a gain term acting on the discrepancy between the observations and the model output. The optimal observer is then designed to minimize a criterion that penalizes deviations from the model dynamics and discrepancies with the observations. The aim of this thesis is to develop an optimal observer theory from a deterministic perspective, in the context of constrained physical systems. More specifically, we focus on cases where the constrained space admits a formulation as a Riemannian manifold. In the first part, we derive an exact formulation of the deterministic optimal observer on compact manifolds, extending the Mortensen observer originally defined in the Euclidean setting. Since such a formulation inherits the high computational cost and the curse of dimensionality of its Euclidean counterpart, we then develop approximation strategies that preserve the theoretical foundations of the optimal observer on Riemannian manifolds while reducing its computational burden. In particular, we propose and theoretically justify extensions of the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) to the manifold setting, by approximating the value functions of the Mortensen observer with quadratic functions. We also develop an approximation of the fully space and time discretized Mortensen observer using dimension-reduction techniques, and in particular low-rank matrix representations of the associated expansion coefficients. For each proposed observer, we provide numerical illustrations on submanifolds of $\mathbb{R}^3$ and Cartesian products thereof. Finally, we assess the established framework in a real-world application: the monitoring of wildland fire propagation. By endowing the abstract space of shapes with a Riemannian structure, we are able to design an observer for this front-propagation problem using observational data of shape type. This work thus bridges rigorous theoretical advances in observer design with practical strategies for data assimilation in complex, constrained physical systems.
  • Do you precondition on the left or on the right?
    • Spillane Nicole
    • Matalon Pierre
    • Szyld Daniel B
    , 2025. This work is a follow-up to a poster that was presented at the DD29 conference. Participants were asked the question: “Do you precondition on the left or on the right?”. Here we report on the results of this social experiment. We also provide context on left, right and split preconditioning, share our literature review on the topic, and analyze some of the finer points. Two examples illustrate that convergence bounds can sometimes lead to misleading conclusions.
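    To fix ideas, here is a tiny numpy sketch of the two transformed systems discussed above (direct solves stand in for a Krylov method; the point is only what each variant hands to the solver, and the matrices are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        A = np.diag([1.0, 10.0, 100.0]) + 0.1 * rng.standard_normal((3, 3))
        b = np.ones(3)
        M = np.diag(np.diag(A))  # a Jacobi-type preconditioner, M ~ A

        Minv = np.linalg.inv(M)
        x_left = np.linalg.solve(Minv @ A, Minv @ b)   # left: M^{-1} A x = M^{-1} b
        y = np.linalg.solve(A @ Minv, b)               # right: A M^{-1} y = b
        x_right = Minv @ y                             #        with x = M^{-1} y

        # Both recover the solution of A x = b, but an iterative solver monitors
        # different residuals: the preconditioned residual on the left, the true
        # residual on the right, which is one of the finer points the paper analyzes.
        assert np.allclose(x_left, np.linalg.solve(A, b))
        assert np.allclose(x_right, np.linalg.solve(A, b))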
  • Signature approach for pricing and hedging path-dependent options with frictions
    • Abi Jaber Eduardo
    • Hainaut Donatien
    • Motte Edouard
    , 2025. We introduce a novel signature approach for pricing and hedging path-dependent options with instantaneous and permanent market impact under a mean-quadratic variation criterion. Leveraging the expressive power of signatures, we recast an inherently nonlinear and non-Markovian stochastic control problem into a tractable form, yielding hedging strategies in (possibly infinite) linear feedback form in the time-augmented signature of the control variables, with coefficients characterized by non-standard infinite-dimensional Riccati equations on the extended tensor algebra. Numerical experiments demonstrate the effectiveness of these signature-based strategies for pricing and hedging general path-dependent payoffs in the presence of frictions. In particular, market impact naturally smooths optimal trading strategies, making low-truncated signature approximations highly accurate and robust in frictional markets, contrary to the frictionless case.
  • PSDNorm: Test-time temporal normalization for deep learning in sleep staging
    • Gnassounou Theo
    • Collas Antoine
    • Flamary Rémi
    • Gramfort Alexandre
    , 2025. Distribution shift poses a significant challenge in machine learning, particularly in biomedical applications using data collected across different subjects, institutions, and recording devices, such as sleep data. While existing normalization layers (BatchNorm, LayerNorm and InstanceNorm) help mitigate distribution shifts, when applied over the time dimension they ignore the dependencies and auto-correlation inherent to the vector coefficients they normalize. In this paper, we propose PSDNorm, which leverages Monge mapping and temporal context to normalize feature maps in deep learning models for signals. Notably, the proposed method operates as a test-time domain adaptation technique, addressing distribution shifts without additional training. Evaluations with architectures based on U-Net or transformer backbones, trained on 10K subjects across 10 datasets, show that PSDNorm achieves state-of-the-art performance on unseen left-out datasets while being four times more data-efficient than BatchNorm.
    DOI : 10.48550/arXiv.2503.04582
  • Multi-source and test-time domain adaptation on multivariate signals using Spatio-Temporal Monge Alignment
    • Gnassounou Theo
    • Collas Antoine
    • Flamary Rémi
    • Lounici Karim
    • Gramfort Alexandre
    , 2024. Machine learning applications on signals such as computer vision or biomedical data often face significant challenges due to the variability that exists across hardware devices or session recordings. This variability poses a Domain Adaptation (DA) problem, as training and testing data distributions often differ. In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities. This Optimal Transport (OT) based method adapts the cross-power spectrum density (cross-PSD) of multivariate signals by mapping them to the Wasserstein barycenter of source domains (multi-source DA). Predictions for new domains can be made by filtering, without retraining the model on source data (test-time DA). We also study and discuss two special cases of the method, Temporal Monge Alignment (TMA) and Spatial Monge Alignment (SMA). Non-asymptotic concentration bounds are derived for the estimation of the mappings, revealing a bias-plus-variance error structure with a variance decay rate of $\mathcal{O}(n_\ell^{-1/2})$, with $n_\ell$ the signal length. This theoretical guarantee demonstrates the efficiency of the proposed computational scheme. Numerical experiments on multivariate biosignals and image data show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings. Notably, STMA is a pre-processing step complementary to state-of-the-art deep learning methods.
    DOI : 10.48550/arXiv.2407.14303
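    A minimal sketch of the temporal (1D) special case described above, under the simplifying assumption of centered stationary Gaussian signals, for which the Monge map between PSDs is a linear filter and the Wasserstein barycenter PSD is the squared mean of the square-root PSDs; all function and variable names here are illustrative, not the authors' API:

        import numpy as np
        from scipy.signal import welch, firwin2

        def tma_filter(source_signals, x_new, fs=100.0, nperseg=256, numtaps=129):
            # Welch PSD estimates for the source-domain signals and the new signal.
            freqs, psd_new = welch(x_new, fs=fs, nperseg=nperseg)
            psds = np.array([welch(x, fs=fs, nperseg=nperseg)[1] for x in source_signals])
            # Wasserstein barycenter of centered Gaussian spectra:
            # squared mean of the square-root PSDs (diagonal/commuting case).
            psd_bar = np.mean(np.sqrt(psds), axis=0) ** 2
            # The Monge map is then a linear filter with magnitude response
            # sqrt(psd_bar / psd_new); realize it as a linear-phase FIR filter.
            gain = np.sqrt(psd_bar / np.maximum(psd_new, 1e-12))
            h = firwin2(numtaps, freqs / (fs / 2), gain)
            return np.convolve(x_new, h, mode="same")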
  • Certificates of nonnegativity of multivariate polynomials with zero-dimensional gradient ideal and infimum attained
    • Bender Matías
    • Tsigaridas Elias
    • Zhu Chaoping
    , 2025. We study the problem of certifying nonnegativity of multivariate polynomials with rational coefficients, under the assumptions that the polynomial attains its infimum and its gradient ideal is zero-dimensional. We introduce a new certificate, based on Rational Univariate Representations (RUR), that is both perfectly complete (it certifies every nonnegative polynomial) and perfectly sound (it correctly identifies any polynomial that is not nonnegative). The certificate reduces the problem to checking whether a univariate polynomial admits a weighted sum-of-squares decomposition modulo an ideal defined by the RUR, thereby avoiding the radicality assumption that limits other existing approaches. We present a Monte Carlo algorithm, \sosrur, that produces such a certificate or returns a rational witness point where the polynomial takes a negative value. For a polynomial in $n$ variables of degree $d$ and maximum coefficient bitsize $\tau$, \sosrur runs in bit complexity $\widetilde{\mathcal{O}}_B(e^{\omega n + \mathcal{O}(\lg n)} d^{(\omega+2)n}(d+\tau))$, and the polynomials in the certificate have coefficients of bitsize at most $\widetilde{\mathcal{O}}(d^{2n+1}(d+n\tau))$, when we ignore polylogarithmic factors. If the polynomial can take negative values, then the algorithm outputs a rational witness point with coordinates of bitsize $\widetilde{\mathcal{O}}(nd^{2n+2}(d+n\tau))$. In the special case where the gradient ideal is radical, in addition to the improvement in the bitsize, our certificate specializes to a representation with improved degree bounds, better by an additive factor of $d$, and lower bit complexity compared to previous results, together with an explicit validation of the RUR, which had previously been overlooked. We also consider sparse certificates, whose structure, complexity, and (bitsize) bounds depend on the Newton polytope $Q$ of the input polynomial and not on the total degree $d$. To compute them, we rely on a Monte Carlo algorithm to obtain a sparse RUR for unmixed systems, which is of independent interest. In particular, we compute the sparse certificate in $\widetilde{\mathcal{O}}_B(n^{6n} \, \mathrm{vol}(Q)^6 \, \tau)$ bit operations, where $\mathrm{vol}(Q)$ is the volume of $Q$. The certificate involves sparse polynomials of degree at most $2^{n+2} \, n!^2 \, \mathrm{vol}(Q)^2$ and coefficient bitsize $\widetilde{\mathcal{O}}(n^{4n} \, \mathrm{vol}(Q)^4 \, \tau)$. These bounds provide sharper guarantees in the presence of sparsity, as they are input-sensitive. This result contributes to the ongoing effort to encode polynomial optimization problems through compact descriptions, in our case by exploiting the sparsity of the input, thereby extending the current frontier of tractability from a computational point of view. Overall, our results extend the scope of certificates modulo the gradient ideal by eliminating the radicality requirement, introduce a variant that exploits the sparsity of the input polynomial, improve existing bitsize complexity bounds by at least a factor of $d^n$, and provide efficient, verifiable algebraic certificates of nonnegativity for polynomial optimization.
  • Optimal exit from Uniswap v3 and best expected return for a liquidity provider
    • Agarwal Ankush
    • Gobet Emmanuel
    , 2025. We analyze the profitability of liquidity providers’ (LPs) positions in Uniswap v3 by aggregating fee income and impermanent loss within an optimal stopping framework. Our first result shows that the liquidity burn should be optimized over one range at a time, rather than simultaneously. Second, without discounting future fees, there is no finite optimal liquidity burn time and indefinite liquidity provision is optimal. In this case, we derive closed-form expressions for the value of LP positions according to different price levels of liquidity burn. Third, with a discount factor, we introduce an equivalent rate of return and demonstrate that under a Black-Scholes model with volatility σ, the optimal return is approximately 0.425·σ² (i.e. about 10% return for 50% volatility), and it is achieved by choosing the at-the-money range of liquidity. These results provide explicit formulas and strategic insights for LPs in Uniswap v3, and complement recent works on Uniswap v2 and fee modelling by highlighting the distinct impact of concentrated liquidity.
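    The abstract's numerical illustration checks out: for σ = 0.5, one gets $0.425\,\sigma^2 = 0.425 \times 0.25 \approx 0.106$, i.e. an equivalent rate of return of roughly 10%, as stated.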
  • AdaCap: An Adaptive Contrastive Approach for Small-Data Neural Networks
    • Belucci Bruno
    • Lounici Karim
    • Meziani Katia
    , 2025. Neural networks struggle on small tabular datasets, where tree-based models remain dominant. We introduce the Adaptive Contrastive Approach (AdaCap), a training scheme that combines a permutation-based contrastive loss with a Tikhonov-based closed-form output mapping. Across 85 real-world regression datasets and multiple architectures, AdaCap yields consistent and statistically significant improvements in the small-sample regime, particularly for residual models. A meta-predictor trained on dataset characteristics (size, skewness, noise) accurately anticipates when AdaCap is beneficial. These results show that AdaCap acts as a targeted regularization mechanism, strengthening neural networks precisely where they are most fragile. All results and code are publicly available at https://github.com/BrunoBelucci/adacap.
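    In its generic form, the closed-form output mapping mentioned above is the standard Tikhonov/ridge solution; a minimal sketch (the function name and exact mapping AdaCap uses are assumptions here, see the repository for the actual implementation):

        import numpy as np

        def tikhonov_output_map(H, Y, lam=1.0):
            # H: (n, p) hidden representations, Y: (n,) regression targets.
            # Ridge/Tikhonov closed form: W = (H^T H + lam * I)^{-1} H^T Y.
            p = H.shape[1]
            return np.linalg.solve(H.T @ H + lam * np.eye(p), H.T @ Y)

        # Predictions on new hidden features H_test are then H_test @ W.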
  • Gaussian process regression for non-Euclidean inputs: graphs and categorical variables
    • Carpintero Perez Raphaël
    , 2025. In the design phases of aircraft engine parts, numerical simulation is traditionally used to identify the right characteristics (geometry, materials, etc.) to meet desired specifications. Meta-models like Gaussian processes can achieve this objective at lower cost by learning from physics-based simulations, with the added benefit of uncertainty quantification. The goal of this thesis is to improve the predictivity of Gaussian process meta-models for non-Euclidean spaces. We propose the Sliced Wasserstein Weisfeiler-Lehman (SWWL) graph kernel, combining continuous Weisfeiler-Lehman iterations and optimal transport, for inputs that are graphs with continuous node attributes. This new kernel is used to learn scalar outputs with Gaussian processes when the inputs are finite element meshes. In the context of numerical simulations, it is also customary to work with output fields defined on meshes, which are challenging to handle due to changes between input geometries in terms of both size and adjacency structure. We develop a new methodology to address such limitations by relying on the combination of regularized optimal transport, dimension reduction techniques, and the use of Gaussian processes indexed by graphs. We illustrate the efficiency of the method on real problems in fluid dynamics and solid mechanics with uncertainty quantification. A final axis focuses on Gaussian process regression with categorical input variables. We carry out a reproducible comparative study of existing categorical kernels with novel evaluation metrics. We also introduce a new clustering-based strategy using target encodings of categorical variables in order to identify groups of levels when they are unknown.
  • Mathematical modeling of fluids from the kinetic theory
    • Giovangigli Vincent
    , 2025. A synthesis of mathematical fluid models derived from the kinetic theory of gases is presented. Kinetic theory yields well-structured sets of partial differential equations with entropies compatible with convection, diffusion and sources. It also yields properly structured nonlinear source terms as well as the transport coefficients. We first address multicomponent flow models including detailed transport and chemical reactions. Symmetrization, hyperbolic-parabolic structure and asymptotic stability of constant equilibrium states are established. We next investigate relaxation issues for a two-temperature model that leads to symmetrized hyperbolic-parabolic systems with stiff source terms. Local existence of solutions is established and the appearance of the bulk viscosity coefficient is justified in the fast relaxation limit. We finally consider nonideal fluids and their cohesive properties that lead to diffuse interface models with capillary effects. Using an augmented formulation, with the density gradient added as an extra unknown, a new type of hyperbolic-parabolic-dispersive structure is obtained with antisymmetric second order coupling terms arising from capillarity. Local existence of solutions is established as well as asymptotic stability of constant states.
  • Asymptotic Stability of Equilibrium States for Cohesive Fluids
    • Giovangigli Vincent
    , 2025. We investigate asymptotic stability of constant equilibrium states for compressible nonisothermal cohesive fluids, also termed capillary fluids or diffuse interface fluids. The density gradient is added as an extra variable and the augmented system of equations is recast into a normal form with symmetric first order transport terms, symmetric dissipative second order terms and antisymmetric cohesive second order terms. Global existence and asymptotic stability of constant equilibrium states are established by using new dissipative conditions for such augmented hyperbolic-parabolic-dispersive systems of equations. Decay estimates are obtained in all spatial dimensions by using the augmented formulation as well as estimates in Fourier spaces.
  • From Kinetic Theory to Hyperbolic-Parabolic Fluid Systems of Equations
    • Giovangigli Vincent
    , 2025. Fluid systems of equations can be derived from the kinetic theory by using the Chapman-Enskog asymptotic method. We investigate the links between the hyperbolic-parabolic structure of such systems and the kinetic framework. We also discuss the mathematical structure of linear systems associated with transport property evaluation.
  • Inf-sup stable non-conforming finite elements on tetrahedra with second and third order accuracy
    • Balazi Loïc
    • Allaire Grégoire
    • Jolivet Pierre
    • Omnes Pascal
    , 2025. We introduce a family of scalar non-conforming finite elements with second and third order accuracy with respect to the $H^1$-norm on tetrahedra. Their vector-valued versions generate, together with discontinuous pressure approximations of order one and two respectively, inf-sup stable finite element pairs with convergence order two and three for the Stokes problem in the energy norm.
  • High-order multistep coupling: convergence, stability and PDE application
    • Simon Antoine
    • François Laurent
    • Massot Marc
    Comptes Rendus. Mécanique, Académie des sciences (Paris), 2025, 353 (G1), pp.1159-1184. Designing coupling schemes for specialized advanced mono-physics solvers in order to conduct accurate and efficient multiphysics simulations is a key issue that has recently received a lot of attention. A novel high-order adaptive multistep coupling strategy has shown potential to improve the efficiency and accuracy of such simulations, but requires further analysis. The purpose of the present contribution is to conduct the numerical analysis of convergence of the explicit and implicit variants of the method and to provide a first analysis of its absolute stability. A simplified coupled problem is constructed to assess the stability of the method along the lines of Dahlquist's test equation for ODEs. We propose a connection with the stability analysis of other methods such as splitting and ImEx schemes. A stability analysis on a representative conjugate heat transfer case is also presented. This work constitutes a first building block towards an a priori analysis of the stability of coupled PDEs.
    DOI : 10.5802/crmeca.333
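    For reference, Dahlquist's test equation invoked above is $y'(t) = \lambda y(t)$ with $\lambda \in \mathbb{C}$; a scheme's region of absolute stability is the set of values $\lambda \Delta t$ for which the numerical solution remains bounded. The paper constructs an analogous simplified coupled problem to play this role for coupling schemes.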
  • A sequential Bayesian approach to Gaussian process quantile regression for optimization
    • Nicolas Hugo
    • Le Maître Olivier
    , 2025, pp.115. Quantile regression extends the classical least-squares regression to the estimation of conditional quantiles of a response variable. In the frequentist approach, the quantile regression problem is cast as the minimization of a loss function, possibly complemented with regularization terms. The Bayesian counterpart formulates the problem as posterior inference over a function space. Of particular interest is Gaussian process quantile regression, which formulates the regression problem as posterior inference over the latent conditional quantiles, where prior knowledge is encoded in the form of a Gaussian process. Existing approaches to Gaussian process quantile regression either perform the regression directly on observed data [1] or resort to sparse approximations to mitigate computational costs [2]. However, in the latter case, the approximation is typically defined over a small set of latent auxiliary variables that act as compact representations of the quantile, but whose locations are fixed in advance, ultimately resulting in suboptimal predictive performance. In this work, we introduce an adaptive strategy that exploits the Gaussian process predictive variance to infill the set of auxiliary variable locations. Inference of the posterior distribution over these auxiliary variables is carried out through its Laplace approximation. The impact of finite training data on the auxiliary variables is estimated through bootstrap resampling. Building on this, we introduce an active learning strategy that acquires new observations of the response variable via rejection sampling, with the sampling density guided by the uncertainties in the auxiliary estimates. Our algorithm combines adaptive auxiliary variable allocation and active learning, leading to a sequential approach that ensures a rich and well-balanced representation of the quantile function. Finally, we extend our quantile regression method and its enrichment criteria to the quantile minimization problem within a Bayesian optimization framework.
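    For context, the loss underlying the frequentist formulation mentioned above is the standard pinball (check) loss at quantile level $\tau \in (0,1)$, $\rho_\tau(u) = u\,(\tau - \mathbf{1}_{\{u < 0\}})$; minimizing $\mathbb{E}[\rho_\tau(Y - q)]$ over $q$ recovers the conditional $\tau$-quantile.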
  • Efficient Simulation of Hawkes Processes using their Affine Volterra Structure
    • Abi Jaber Eduardo
    • Attal Elie
    • Sotnikov Dimitri
    , 2025. We introduce a novel and efficient simulation scheme for Hawkes processes on a fixed time grid, leveraging their affine Volterra structure. The key idea is to first simulate the integrated intensity and the counting process using Inverse Gaussian and Poisson distributions, from which the jump times can then be easily recovered. Unlike conventional exact algorithms based on sampling jump times first, which have random computational complexity and can be prohibitive in the presence of high activity or singular kernels, our scheme has deterministic complexity which enables efficient large-scale Monte Carlo simulations and facilitates vectorization. Our method applies to any nonnegative, locally integrable kernel, including singular and non-monotone ones. By reformulating the scheme as a stochastic Volterra equation with a measure-valued kernel, we establish weak convergence to the target Hawkes process in the Skorokhod J1 topology. Numerical experiments confirm substantial computational gains while preserving high accuracy across a wide range of kernels, with remarkably improved performance for a variant of our scheme based on the resolvent of the kernel.
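    For contrast, the conventional exact approach with random complexity that the abstract refers to is jump-time sampling by thinning (Ogata's algorithm); a minimal sketch for the exponential kernel alpha * exp(-beta * t) follows (the paper's grid-based Inverse Gaussian/Poisson scheme itself is specified in the paper):

        import numpy as np

        def hawkes_ogata(mu, alpha, beta, T, rng=np.random.default_rng(0)):
            # Exact simulation of the jump times of a Hawkes process with
            # baseline mu and kernel alpha * exp(-beta * t), by Ogata's thinning.
            t, jumps = 0.0, []
            while True:
                # Between jumps the intensity decays, so its current value is an
                # upper bound until the next candidate point.
                lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in jumps)
                t += rng.exponential(1.0 / lam_bar)
                if t >= T:
                    break
                lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in jumps)
                if rng.uniform() <= lam_t / lam_bar:  # accept with prob lam_t / lam_bar
                    jumps.append(t)
            return np.array(jumps)

        # Note the random cost: one candidate draw per accepted or rejected point,
        # which blows up for high activity, the drawback the scheme above avoids.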
  • Optimal filtering in closed manifolds: a deterministic perspective
    • Le Ruz Gaël
    • Moireau Philippe
    , 2025. In this paper, we adopt an optimal control viewpoint to formulate a rigorous deterministic filtering theory when the dynamics and the observations are defined on manifolds. Our result extends the Mortensen observer to closed manifolds, namely compact manifolds without boundary, in both continuous and discrete time, where the latter ultimately yields a convergent time discretization of the former. The resulting observer requires the computation of the viscosity solution of a Hamilton-Jacobi-Bellman equation on the state manifold, which we illustrate on the sphere.
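    Schematically, in the Euclidean setting being extended here, the Mortensen observer selects $\hat{x}(t) \in \arg\min_{x} V(x,t)$, where the cost-to-come $V$ is the viscosity solution of a Hamilton-Jacobi-Bellman equation penalizing, quadratically, the model noise and the observation discrepancy along trajectories reaching $x$ at time $t$; the paper constructs the counterpart of $V$ and of this minimization when the state space is a closed manifold such as the sphere.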
  • HTGAN: Heavy-Tail GAN for Multivariate Dependent Extremes via Latent-Dimensional Control
    • Girard Stéphane
    • Gobet Emmanuel
    • Pachebat Jean
    International Journal of Computer Mathematics, Taylor & Francis, 2025, pp.1-41. Dealing with extreme values is a major challenge in probabilistic modeling, of great importance in various application domains such as economics, engineering and life sciences. In the context of generative modeling, it is known that models based on transformations of a light-tailed distribution, such as Generative Adversarial Networks (GANs), fail to capture the behaviour in the tails. In particular, these models are not able to capture the dependence in extreme regions. We study a modified version of the GAN algorithm where the input is a heavy-tailed distribution (and we call it HTGAN). Recalling the stable tail dependence function (stdf), a tool from extreme-value theory that measures the dependence structure in extreme regions, we provide a bound on the approximation of the stdf of the target with the output of an HTGAN. This bound scales as $N^{-1/(d-1)}$, where $N$ is the dimension of the input noise of the network and $d$ is the dimension of the data of interest. This suggests increasing the dimension of the latent noise to gain precision in the estimation of dependence. We perform experiments comparing HTGAN with a classical light-tailed GAN (LTGAN) on both synthetic and real datasets exhibiting heavy-tailed characteristics. These experiments confirm our theoretical findings: first, the HTGAN algorithm is better at reproducing dependence in extremes than LTGAN; second, the quality of approximation gets better as the dimension of the latent noise increases.
    DOI : 10.1080/00207160.2025.2578391
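    A minimal illustration of the latent swap the abstract describes, with a Student-t latent standing in for a generic heavy-tailed input law (the paper's exact choice of heavy-tailed distribution is an assumption here):

        import numpy as np

        rng = np.random.default_rng(0)
        n, N = 100_000, 16  # samples and latent dimension N, as in the bound above

        z_light = rng.standard_normal((n, N))          # LTGAN-style Gaussian latent
        z_heavy = rng.standard_t(df=2.5, size=(n, N))  # HTGAN-style heavy-tailed latent

        # Lipschitz generators keep Gaussian inputs light-tailed, which is the
        # failure mode described above; the heavy-tailed latent retains extremes.
        for name, z in [("light", z_light), ("heavy", z_heavy)]:
            print(name, (np.abs(z[:, 0]) > 6.0).mean())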
  • On inverse source problem for piecewise constant case
    • Novikov Roman
    • Sharma Basant Lal
    , 2025. We consider the inverse source problem at a fixed positive frequency. We prove a global uniqueness theorem for this problem in the case of a piecewise constant source.
  • Hybrid FEM/IPDG semi-implicit schemes for time domain electromagnetic wave propagation in non cylindrical coaxial cables
    • Beni Hamad Akram
    • Imperiale Sébastien
    • Joly Patrick
    , 2025. In this work, we develop an efficient numerical method for solving 3D Maxwell's equations in non-cylindrical coaxial cables. The main challenge arises from the elongated geometry of the computational domain, which induces strong anisotropy between the longitudinal direction (along the cable) and the transverse directions (within the cross-sections). This leads to the use of highly anisotropic meshes, where the longitudinal mesh size is much larger than the transverse one. Our objective is to design a numerical scheme that is explicit in the longitudinal direction, with a CFL stability condition depending only on the longitudinal mesh size. In a previous work, we achieved this for cylindrical cables by employing prismatic edge elements, 1D quadrature for longitudinal mass lumping, and a hybrid explicit/implicit time discretization. The present paper extends this approach to non-cylindrical cables, addressing several new difficulties with the following key ingredients: (1) representing the cable as a deformation of a reference cylindrical cable and employing mapping techniques between the physical and reference domains; (2) using an anisotropic space discretization that combines an interior penalty discontinuous Galerkin (IPDG) method in the transverse directions with a conforming finite element method in the longitudinal direction; (3) utilizing prismatic edge elements on a prismatic mesh of the reference cable; and (4) adapting the construction of the hybrid explicit-implicit time discretization to the new structure of the semidiscrete problem. From a theoretical perspective, the main difficulty lies in the stability analysis, which requires extending and adapting standard techniques for DG methods in space and energy methods in time.
  • Multi-source domain adaptation for learning on biosignals
    • Gnassounou Theo
    , 2025. The success of modern machine learning models often relies on large-scale labeled datasets. However, in specialized fields like neuroscience, data is frequently scarce due to privacy concerns, leading to low variability in the training data. This results in a distribution shift where models trained on a source dataset perform poorly on a different target dataset. Domain Adaptation (DA) aims to solve this by adapting models to new target domains with different data distributions, without access to target labels. Among the various DA techniques, Optimal Transport (OT) has emerged as a powerful and principled tool for aligning distributions between domains. This thesis specifically addresses the challenges of applying DA to Electroencephalography (EEG) data. EEG signals exhibit high variability across subjects and sessions, a significant source of distribution shift, which is further compounded by limited data availability. Despite its potential, the broader application of DA is hampered by a lack of accessible software and reproducible benchmarks with realistic validation protocols. Moreover, existing DA methods are often tailored for computer vision, making them less effective for other domains like EEG analysis. This thesis addresses these challenges by improving the accessibility and reproducibility of DA methods. We introduce novel techniques for aligning EEG signals and propose a robust normalization layer for deep learning models. This thesis is organized as follows: Chapter 1 provides an overview of domain adaptation, reviewing distribution shifts and the methods designed to mitigate them, including modern deep learning approaches. It then details Optimal Transport (OT) theory and its applications to aligning distributions, including the specific case where the data are assumed Gaussian. Finally, it connects these concepts to the analysis of brain signals (EEG), where high variability presents a significant distribution shift challenge. Chapter 2 introduces Skada, a Python toolbox for simplifying the implementation and evaluation of domain adaptation methods. It also presents SKADA-bench, a comprehensive benchmark using a rigorous cross-validation protocol to assess DA methods across various data modalities. The results highlight the challenges of hyperparameter tuning and the effectiveness of parameter-free methods. Chapter 3 proposes Spatio-Temporal Monge Alignment (STMA), a novel method for multi-source and test-time adaptation on EEG data. STMA aligns the Power Spectral Density (PSD) of signals to a common barycenter, effectively normalizing the data. Extensive experiments on BCI and sleep staging datasets demonstrate that STMA significantly improves model performance and generalization. Chapter 4 presents PSDNorm, a test-time temporal normalization method for deep learning in sleep staging. PSDNorm is integrated as a normalization layer within a neural network to align the PSD during feature extraction. Validated on 10 diverse datasets comprising more than 10,000 subjects, PSDNorm is shown to significantly outperform traditional normalization techniques. Finally, Chapter 5 summarizes the contributions of this thesis and outlines future research directions, including the evolution of the Skada library, expanding domain adaptation benchmarks, and generalizing the Monge Alignment framework to other data modalities.
  • Computer-assisted methods for diffusion models in population dynamics : cross-diffusion, non-local reactions, and stability
    • Payan Maxime
    , 2025. The analysis of nonlinear cross-diffusion systems is a major challenge for understanding complex physical or biological phenomena. Computer-assisted methods provide powerful tools to deliver precise and rigorous answers to theoretical questions such as the existence and stability of steady states, or the search for eigenvalues in concrete situations. This thesis proposes the application and adaptation of these methods in the context of cross-diffusion and nonlocal or nonhomogeneous reaction-diffusion models. A first study concerns a chemotaxis model incorporating a nonlinearity in the diffusion term. We develop a specific tool to handle this nonlinearity, allowing us to establish the existence of multiple steady states using a fixed point theorem whose assumptions are verified by computer. A second study focuses on a nonlocal reaction-diffusion model, where we study the stability of a steady state as a function of the intensity of nonlocality. This study is a first step towards the analysis of stability for cross-diffusion systems. We propose a computer-assisted approach for the study of the spectrum of a compact resolvent operator, using a Gershgorin theorem for infinite matrices. We then analyze the dependence of the first eigenvalue with respect to the nonlocality parameter in order to determine the existence and uniqueness of a stability threshold. Finally, a last study on nonlinear models, which include cross-diffusion models, allows us to propose a general methodology to study the existence and stability of steady states. This methodology is inspired by an analogy with the Lyapunov stability theorem for systems of ordinary differential equations. We build a positive definite self-adjoint operator, whose properties are verified by a computer, thus obtaining a quadratic Lyapunov functional.
  • Sensitivity analysis of a flow redistribution model for a multidimensional and multifidelity simulation of fuel assembly bow in a pressurized water reactor
    • Abboud Ali
    • de Lambert Stanislas
    • Garnier Josselin
    • Leturcq Bertrand
    • Lamorte Nicolas
    Nuclear Engineering and Design, Elsevier, 2025, 443, pp.114259.
    DOI : 10.1016/j.nucengdes.2025.114259
  • Finite elements approximation of the boundary value problems of geodesics
    • Le Ruz Gaël
    • Lombardi Damiano
    , 2025. In this contribution we investigate the numerical approximation of the boundary value problems of geodesics, i.e. the log operation, on Riemannian manifolds. In particular, by leveraging the variational formulation of the geodesics problem, we propose a finite element discretisation in time. We also investigate the numerical approximation of the Hessian of the squared distance, through a sensitivity analysis of the first problem.
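    For context, the variational formulation in question seeks, between fixed endpoints $p, q$ on the manifold, a minimizer of the energy functional $E(\gamma) = \frac{1}{2}\int_0^1 g_{\gamma(t)}(\dot{\gamma}(t), \dot{\gamma}(t))\,dt$ subject to $\gamma(0) = p$ and $\gamma(1) = q$, whose minimizers are the constant-speed geodesics; the time discretisation is applied to this minimization problem.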
  • Error-based mesh selection for efficient numerical simulations with variable parameters
    • Dornier Hugo
    • Le Maître Olivier P
    • Congedo Pietro Marco
    • Salah El Din Itham
    • Marty Julien
    • Bourasseau Sébastien
    , 2025. Advanced numerical simulations often depend on mesh refinement techniques to manage discretization errors in complex models and reduce computational costs. This work concentrates on Adaptive Mesh Refinement (AMR) for steady-state solutions, which uses error estimators to iteratively refine the mesh locally and gradually tailor it to the solution. AMR requires evaluating the solution across a series of meshes. When solving the model for multiple operating conditions, such as in uncertainty quantification studies, full systematic adaptation can cause significant computational overhead. To mitigate this, the Error-based Mesh Selection (EMS) method is introduced to decrease the cost of adaptation. For each operating condition, EMS seeks to choose, from a library of pre-adapted meshes, the one that minimizes the discretization error. A key feature of this approach is the use of Gaussian Process models to predict the solution errors for each mesh in the library. These error models are built solely from the library's meshes and their solutions, using restriction errors as proxies for discretization errors, thereby avoiding additional model evaluations. The EMS method is tested on an analytical shock problem and a supersonic scramjet configuration, showing near-optimal mesh selection. The influence of library size on the resulting error level is also examined.
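    A minimal sketch of the selection step described above, with one Gaussian Process error model per library mesh, fitted on operating conditions against restriction-error proxies; the names, kernel choice and array shapes are illustrative assumptions, not the authors' implementation:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def select_mesh(theta_new, thetas_train, errors_train):
            # thetas_train: (n, d) operating conditions; errors_train: (n, m)
            # restriction-error proxies, one column per pre-adapted mesh.
            preds = []
            for j in range(errors_train.shape[1]):
                gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
                gp.fit(thetas_train, errors_train[:, j])
                preds.append(gp.predict(np.atleast_2d(theta_new))[0])
            # Pick the library mesh with the smallest predicted discretization error.
            return int(np.argmin(preds))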