
Publications


CMAP theses are available by following this link:
Discover CMAP theses

Listed below, sorted by year, are the publications appearing in the HAL open archive.

2025

  • An optimal transport based embedding to quantify the distance between playing styles in collective sports
    • Baouan Ali
    • Rosenbaum Mathieu
    • Pulido Sergio
    Journal of Quantitative Analysis in Sports, De Gruyter, 2025. This study presents a quantitative framework to compare teams in collective sports with respect to their style of play. The style of play is characterized by the team's spatial distribution over a collection of frames. As a first step, we introduce an optimal transport-based embedding to map frames into Euclidean space, allowing for the efficient computation of a distance. Then, building on this frame-level analysis, we leverage quantization to establish a similarity metric between teams based on a collection of frames from their games. For illustration, we present an analysis of a collection of games from the 2021-2022 Ligue 1 season. We are able to retrieve relevant clusters of game situations and calculate the similarity matrix between teams in terms of style of play. Additionally, we demonstrate the strength of the embedding as a preprocessing tool for relevant prediction tasks. Likewise, we apply our framework to analyze the dynamics in the first half of the NBA season in 2015-2016. (10.1515/jqas-2025-0007)
    DOI : 10.1515/jqas-2025-0007
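As a minimal illustration of the frame-level idea in this paper (not the paper's actual embedding), exact optimal transport between two uniformly weighted player configurations of equal size reduces to a minimum-cost assignment, which gives a Wasserstein-type distance between frames:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frame_distance(frame_a, frame_b):
    """Wasserstein-type distance between two frames.

    Each frame is an (n_players, 2) array of pitch positions with
    uniform weights, so exact optimal transport reduces to a
    minimum-cost assignment between the two point sets.
    """
    # Pairwise squared Euclidean costs between player positions.
    cost = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=-1) ** 2
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

# Two toy 4-player frames: one compact block, and the same block shifted by (10, 0).
a = np.array([[0.0, 0.0], [0.0, 5.0], [5.0, 0.0], [5.0, 5.0]])
b = a + np.array([10.0, 0.0])
print(frame_distance(a, a))  # 0.0: identical spatial distributions
print(frame_distance(a, b))  # 10.0: a pure translation of the block
```

The function `frame_distance` and the toy frames are hypothetical; the paper builds an embedding on top of such frame-level distances before quantizing to compare teams.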
  • Federated Majorize-Minimization: Beyond Parameter Aggregation
    • Dieuleveut Aymeric
    • Fort Gersende
    • Hegazy Mahmoud
    • Wai Hoi-To
    , 2025. This paper proposes a unified approach for designing stochastic optimization algorithms that robustly scale to the federated learning setting. Our work studies a class of Majorize-Minimization (MM) problems, which possesses a linearly parameterized family of majorizing surrogate functions. This framework encompasses (proximal) gradient-based algorithms for (regularized) smooth objectives, the Expectation Maximization algorithm, and many problems seen as variational surrogate MM. We show that our framework motivates a unifying algorithm called Stochastic Approximation Stochastic Surrogate MM (SA-SSMM), which includes previous stochastic MM procedures as special instances. We then extend SA-SSMM to the federated setting, while taking into consideration common bottlenecks such as data heterogeneity, partial participation, and communication constraints; this yields FedMM. The originality of FedMM is to learn locally and then aggregate information characterizing the surrogate majorizing function, contrary to classical algorithms which learn and aggregate the original parameter. Finally, to showcase the flexibility of this methodology beyond our theoretical setting, we use it to design an algorithm for computing optimal transport maps in the federated setting.
  • Any nonincreasing convergence curves are simultaneously possible for GMRES and weighted GMRES, as well as for left and right preconditioned GMRES
    • Matalon Pierre
    • Spillane Nicole
    , 2025. The convergence of the GMRES linear solver is notoriously hard to predict. A particularly enlightening result by [Greenbaum, Pták, Strakoš, 1996] is that, given any convergence curve, one can build a linear system for which GMRES realizes that convergence curve. What is even more extraordinary is that the eigenvalues of the problem matrix can be chosen arbitrarily. We build upon this idea to derive novel results about weighted GMRES. We prove that for any linear system and any prescribed convergence curve, there exists a weight matrix M for which weighted GMRES (i.e. GMRES in the inner product induced by M ) realizes that convergence curve, and we characterize the form of M . Additionally, we exhibit a necessary and sufficient condition on M for the simultaneous prescription of two convergence curves, one realized by GMRES in the Euclidean inner product, and the other in the inner product induced by M . These results are then applied to infer some properties of preconditioned GMRES when the preconditioner is applied either on the left or on the right. For instance, we show that any two convergence curves are simultaneously possible for left and right preconditioned GMRES.
  • Infinite Dimensional Mean-Field Belavkin Equation: Well-posedness and Derivation
    • de Bouard Anne
    • Guo Gaoyue
    • Hérouard Théo
, 2025. We analyze the mean-field limit of a stochastic Schrödinger equation arising in quantum optimal control and mean-field games, where N interacting particles undergo continuous indirect measurement. For the open quantum system described by Belavkin's filtering equation, we derive a mean-field approximation under minimal assumptions, extending prior results limited to bounded operators and finite-dimensional settings. By establishing global well-posedness via fixed-point methods (avoiding measure-change techniques), we obtain higher regularity solutions. Furthermore, we prove rigorous convergence to the mean-field limit in an infinite-dimensional framework. Our work provides the first derivation of such limits for wave functions in $L^2(R^d)$, with implications for simulating and controlling large quantum systems.
  • Do you precondition on the left or on the right? A poster at DD29.
    • Spillane Nicole
    • Szyld Daniel B
    • Matalon Pierre
    , 2025. The idea behind preconditioning is to accelerate a linear solver by providing it with an approximate inverse of the problem matrix. There are two main ways to precondition the problem Ax = b. Letting H be the preconditioner, either HAx = Hb is solved (left preconditioning), or AHu = b is solved and the solution is x = Hu (right preconditioning). Split preconditioning is also an option. The goal of this poster is to present similarities and differences between left, right and split preconditioning. We also aim to start a conversation about whether there is a best choice and what your practices are when it comes to preconditioning.
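In exact arithmetic both variants recover the solution of the original system Ax = b, which a small sketch makes concrete (the Jacobi-style choice of H below is a hypothetical example, not one from the poster):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominated test matrix
b = rng.standard_normal(n)
H = np.diag(1.0 / np.diag(A))                     # Jacobi-style approximate inverse of A

# Left preconditioning: solve (HA) x = Hb for x directly.
x_left = np.linalg.solve(H @ A, H @ b)

# Right preconditioning: solve (AH) u = b, then recover x = H u.
u = np.linalg.solve(A @ H, b)
x_right = H @ u

# Both variants recover the solution of the original system Ax = b.
x_true = np.linalg.solve(A, b)
```

The differences the poster discusses arise in finite precision and in how a Krylov solver such as GMRES measures its residual in each variant, not in the exact solution itself.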
  • Distilling Foundation Models for Robust and Efficient Models in Digital Pathology
    • Filiot Alexandre
    • Dop Nicolas
    • Tchita Oussama
    • Riou Auriane
    • Dubois Rémy
    • Peeters Thomas
    • Valter Daria
    • Scalbert Marin
    • Saillard Charlie
    • Robin Geneviève
    • Olivier Antoine
    , 2025, 15966, pp.162-172. In recent years, the advent of foundation models (FM) for digital pathology has relied heavily on scaling the pre-training datasets and the model size, yielding large and powerful models. While this has improved performance on diverse downstream tasks, it has also increased computational cost and inference time. In this work, we explore the distillation of a large foundation model into a smaller one, reducing the number of parameters by several orders of magnitude. Leveraging distillation techniques, our distilled model, H0-mini, achieves comparable performance to large FMs at a significantly reduced inference cost on the HEST and EVA public benchmarks. Additionally, we conduct robustness analyses on the PLISM-WSI dataset and a multi-scanner, multi-staining private breast cancer cohort. We demonstrate that our distilled model reaches excellent robustness to variations in staining and scanning conditions, significantly outperforming other state-of-the-art models. This opens new perspectives to design lightweight and robust models for digital pathology, without compromising on performance. We publicly release H0-mini along with plismbench, the first robustness benchmark of pathology foundation models based on the PLISM dataset. (10.1007/978-3-032-04981-0_16)
    DOI : 10.1007/978-3-032-04981-0_16
  • Debiased Multifidelity Approach to Surrogate Modeling in Aerospace Applications
    • Gori Giulio
    • Le Maître Olivier P
    • Congedo Pietro Marco
    Journal of Aircraft, American Institute of Aeronautics and Astronautics, 2025, pp.1-14. We propose a multifidelity formulation for generating cokriging surrogates of complex physics models. First, we show that the standard autoregressive recursive approach may be subject to substantial limitations due to possible modeler’s biases/errors. These are inherent to the process of establishing a nested hierarchy concerning the alleged fidelity of the available models. The formulation we propose mitigates this issue. At each hierarchy level, the predictor consists of a linear combination of all previous levels instead of just the underlying one. The methodology implies a slightly higher training cost for the surrogate. However, the higher training cost is acceptable, considering the effort typically required to generate data in aerospace applications. A few artificial tests, including the optimization of a two-dimensional airfoil, illustrate strengths and weaknesses of the approach. (10.2514/1.C037765)
    DOI : 10.2514/1.C037765
  • Strategic geometric graphs through mean field games
    • Bertucci Charles
    • Rakotomalala Matthias
    SIAM Journal on Control and Optimization, Society for Industrial and Applied Mathematics, 2025, 63 (4), pp.2577-2604. We exploit the structure of geometric graphs on Riemannian manifolds to analyze strategic dynamic graphs at the limit, when the number of nodes tends to infinity. This framework makes it possible to preserve intrinsic geometrical information about the limiting graph structure, such as the Ollivier curvature. After introducing the setting, we derive a mean field game system, which models a strategic equilibrium between the nodes. It has the usual structure with the distinction of being set on a manifold. Finally, we establish existence and uniqueness of solutions to the system when the Hamiltonian is quadratic for a class of not necessarily compact Riemannian manifolds, referred to as manifolds of bounded geometry. (10.1137/24M1666471)
    DOI : 10.1137/24M1666471
  • GenEO spectral coarse spaces in SPD domain decomposition
    • Spillane Nicole
    Numerical Algorithms, Springer Verlag, 2025. Two-level domain decomposition methods are preconditioned Krylov solvers. What separates one- and two-level domain decomposition methods is the presence of a coarse space in the latter. The abstract Schwarz framework is a formalism that makes it possible to define and study a large variety of two-level methods. The objective of this article is to define, in the abstract Schwarz framework, a family of coarse spaces called the GenEO coarse spaces (for Generalized Eigenvalues in the Overlaps). In detail, this work is a generalization of several methods, each of which exists for a particular choice of domain decomposition method. The article both unifies the GenEO theory and extends it to new settings. The proofs are based on an abstract Schwarz theory which now applies to coarse space corrections by projection, and has been extended to consider singular local solves. Bounds for the condition numbers of the preconditioned operators are proved that are independent of the parameters in the problem (e.g., any coefficients in an underlying PDE or the number of subdomains). The coarse spaces are computed by finding low- or high-frequency spaces of some well-chosen generalized eigenvalue problems in each subdomain. The abstract framework is illustrated by defining two-level Additive Schwarz, Neumann-Neumann and Inexact Schwarz preconditioners for a two-dimensional linear elasticity problem. Explicit theoretical bounds as well as numerical results are provided for this example. (10.1007/s11075-025-02166-x)
    DOI : 10.1007/s11075-025-02166-x
  • Benchmarking Powell's Legacy: Performance of Five Derivative-Free Solvers in pdfo on the bbob Test Suite
    • Brockhoff Dimo
    • Villain Tanguy
    , 2025, pp.1833-1841. The pdfo library by Tom M. Ragonneau and Zaikun Zhang makes the five derivative-free solvers BOBYQA, COBYLA, LINCOA, NEWUOA, and UOBYQA—originally written by Michael J. D. Powell—available in Python. In this paper, we are comparing their performance on the bbob test suite with three other solvers from the COCO data archive: CMA-ES from pycma, SLSQP and BFGS from scipy. We also compare the original solvers, written by Powell in Fortran 77, with the current pdfo versions, which saw multiple bug fixes and code improvements by Ragonneau and Zhang. For the latter comparison, we do not see large effects on performance between the Fortran 77 version and the current pdfo version. The only notable exception is the Bent Cigar function where we observe differences by a factor of 2–5 for BOBYQA, LINCOA, and NEWUOA. Compared to the other baseline algorithms, BOBYQA, LINCOA and NEWUOA perform very similarly over all bbob functions, being about a factor of 5 slower than SLSQP and BFGS while UOBYQA—as the best-performing pdfo solver—outperforms SLSQP and BFGS for larger budgets when compared over all 24 bbob functions. The linear surrogate of COBYLA, on the contrary, is clearly worse over all functions than the other algorithms. (10.1145/3712255.3734343)
    DOI : 10.1145/3712255.3734343
  • A Continuation Method Based on CMA-ES
    • Vu Hoang Nguyen
    • Brockhoff Dimo
    , 2025, pp.439 - 442. In this poster, we showcase a new algorithm for approximating the Pareto set of two-objective (unconstrained) optimization problems based on the idea of continuation. The algorithm tries to move "along" the Pareto set from one single-objective optimum to the other and back via a single-objective reformulation of the two-objective problem and the well-known CMA-ES as single-objective solver. The introduced algorithm BOG-CMA-ES (standing for bi-objective gradient based CMA-ES) is visually analyzed on simple convex-quadratic objective functions and extensively benchmarked on the bbob-biobj test suite of the COCO platform, including comparisons with current state-of-the-art algorithms. (10.1145/3712255.3726645)
    DOI : 10.1145/3712255.3726645
  • On the Robustness of BFGS to Positive and Negative Noise Outliers on the BBOB Test Suite
    • Chotard Alexandre
    • Auger Anne
    , 2025, pp.1850-1858. We investigate the impact of outlier noise on the performance of the scipy.optimize implementation of the quasi-Newton BFGS solver. Using the BBOB testbed corrupted with positive (making solutions look worse than they are) or negative (making solutions look better than they are) outliers simulated with a Cauchy distribution with a probability p, we analyze how the performance is impacted. We show that the impact of positive or negative noise outliers is almost symmetric, that on simple problems BFGS has some robustness to noise, and that for ill-conditioned problems BFGS appears to fail when p or the dimension is too high. (10.1145/3712255.3734333)
    DOI : 10.1145/3712255.3734333
  • On the Robustness of Nelder-Mead to Positive and Negative Noise Outliers with Heavy-Tails on the BBOB Test Suite
    • Chotard Alexandre
    • Auger Anne
    , 2025, pp.1859-1866. We investigate the robustness to noise and outliers of the Nelder-Mead derivative-free optimization algorithm implemented in the scipy.optimize Python library. Using the noisifier module from the COCO platform, we investigate the impact of adding or subtracting a positive Cauchy noise to the function value with a certain probability p. This probability is varied from 0 to 0.4, allowing us to appraise the impact of the noise. We find that Nelder-Mead has highly asymmetric performance with respect to adding a positive noise (i.e. artificially degrading possibly good solutions) or adding a negative noise (i.e. making possibly bad solutions appear good). When adding positive noise, Nelder-Mead shows some robustness in low dimensions (≤ 5). However, adding negative noise has very detrimental consequences for the performance of Nelder-Mead, even with a low value of p or in low dimension. (10.1145/3712255.3734335)
    DOI : 10.1145/3712255.3734335
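The outlier model used in the two studies above can be sketched as a wrapper around the objective function; the exact parameters of the COCO noisifier are assumptions here, with the outlier magnitude taken as the absolute value of a standard Cauchy variate:

```python
import numpy as np

def noisify(f, p, sign=+1, seed=0):
    """Wrap objective f so that, with probability p, a heavy-tailed
    outlier is added to (sign=+1, worsening) or subtracted from
    (sign=-1, improving) the returned value."""
    rng = np.random.default_rng(seed)
    def noisy_f(x):
        y = f(x)
        if rng.random() < p:
            y += sign * abs(rng.standard_cauchy())
        return y
    return noisy_f

sphere = lambda x: float(np.dot(x, x))
g = noisify(sphere, p=0.4, sign=-1)        # negative outliers: bad points look good
values = [g(np.ones(3)) for _ in range(1000)]
# Roughly 40% of evaluations are corrupted downward from the true value 3.0.
frac_corrupted = np.mean([v < 3.0 for v in values])
```

Such "improving" outliers are the harder case for both solvers, since a corrupted evaluation can make an arbitrarily bad solution look like the best point seen so far.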
  • On the Pareto Set and Front of Multiobjective Spherical Functions with Convex Constraints
    • Auger Anne
    • Brockhoff Dimo
    • Cork Jordan
    • Tušar Tea
    , 2025, pp.527-535. We analyze a fundamental class of multiobjective constrained problems where the objectives are spherical functions and the constraints are convex. As an application of the projection theorem on closed convex sets, we prove that the constrained Pareto set corresponds to the orthogonal projection of the unconstrained Pareto set onto the feasible region. We establish this fundamental geometric property and illustrate its implications using visualizations of Pareto sets and fronts under various constraint configurations. Furthermore, we assess the performance of NSGA-II on these problems, examining its ability to approximate the constrained Pareto set across different dimensions. Our findings highlight the importance of theoretically grounded and understood benchmark problems for assessing algorithmic behavior and contribute to a deeper understanding of constrained multiobjective landscapes. (10.1145/3712256.3726432)
    DOI : 10.1145/3712256.3726432
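For two spherical objectives, the unconstrained Pareto set is the segment between the two centres; the geometric property above then says the constrained Pareto set is its orthogonal projection onto the feasible region. A minimal sketch with a box constraint (the centres and box below are toy values, not from the paper; for a box, the projection is a coordinate-wise clip):

```python
import numpy as np

c1, c2 = np.array([0.0, 0.0]), np.array([4.0, 4.0])   # centres of the two spheres
lo, hi = np.array([1.0, 0.0]), np.array([3.0, 1.0])   # convex feasible box

# Unconstrained Pareto set: points on the segment between the centres.
ts = np.linspace(0.0, 1.0, 5)
unconstrained = np.array([(1 - t) * c1 + t * c2 for t in ts])

# Constrained Pareto set: orthogonal projection onto the box,
# i.e. a coordinate-wise clip for this feasible region.
constrained = np.clip(unconstrained, lo, hi)
print(constrained)
```

Sampling the segment and projecting each point gives a cheap reference set against which an algorithm's approximation of the constrained Pareto set can be compared.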
  • Rank-based Linear-Quadratic Surrogate Assisted CMA-ES
    • Gharafi Mohamed
    • Hansen Nikolaus
    • Le Riche Rodolphe
    • Brockhoff Dimo
    , 2025. In this poster, we introduce a rank-based surrogate-assisted variant of CMA-ES. Unlike previous methods that employ rank information as constraints to train an SVM classifier, our approach employs a linear-quadratic regression on the ranks. We investigate the method's invariance empirically. While this first algorithm outperforms CMA-ES with a few exceptions, it falls short of entirely meeting the lq-CMA-ES performance levels. To address this, we propose an enhanced variant that jointly handles two alternative surrogates, one based on the ranks and one based on the original function values. Although this variant sacrifices strict invariance, it gains in robustness and achieves performance comparable to, or even exceeding, lq-CMA-ES on transformed problems. This last algorithm shows how simply incorporating new transformations of rank values could improve any surrogate-based CMA-ES variant.
  • How Robust is UOBYQA to Worsening, Frozen Noise? Investigations on the bbob Test Suite With Outliers
    • Brockhoff Dimo
    • Villain Tanguy
    , 2025, pp.1842 - 1849. UOBYQA, short for Unconstrained Optimization By Quadratic Approximation, is one of the well-known solvers derived and implemented by Michael J. D. Powell. In each step, the algorithm builds a quadratic surrogate of the objective function, interpolating quadratically many points for which the true function values are known. The model is optimized within the so-called trust region and the resulting solution is evaluated next. Adaptation of the trust region radius allows for fast convergence on a wide range of (noiseless) functions without the need for derivatives. In this workshop paper, we investigate the effect of (frozen) nonnegative, i.e., worsening noise on UOBYQA with varying probability of solutions being affected by the noise. To this end, we use the COCO platform and its newest addition, the noiser, applied to the classical bbob functions. The numerical benchmarking experiments showcase that UOBYQA is negatively affected by the noise, but surprisingly little over a wide range of noise strengths for some of the bbob functions. (10.1145/3712255.3734354)
    DOI : 10.1145/3712255.3734354
  • Benchmarking CMA-ES under Additive and Subtractive Noise on the BBOB Testbed
    • Girardin Oskar
    , 2025, pp.1867-1874. We benchmark a non-elitist CMA-ES algorithm on the BBOB testbed with additive and subtractive noise. In particular, we consider the case where re-evaluated solutions produce the same observed function value. As a comparison, we benchmark a version of CMA-ES with resampling, which aims at reducing the effective noise level. We find CMA-ES to be more sensitive to subtractive noise than to additive noise in dimensions 2, 3, 5, 10, 20 and 40. Resampling for CMA-ES appears to be detrimental for low noise levels, while it is beneficial for high noise levels. (10.1145/3712255.3734332)
    DOI : 10.1145/3712255.3734332
  • Classification-Based Linear Surrogate Modeling of Constraints for AL-CMA-ES
    • Girardin Oskar
    • Hansen Nikolaus
    • Brockhoff Dimo
    • Auger Anne
    , 2025, pp.728-736. We introduce linear surrogate functions for modeling inequality constraints to solve constrained blackbox optimization problems with the Augmented Lagrangian CMA-ES. Each surrogate is constructed from a binary classifier that predicts the sign of the constraint value. The classifier, and consequently the resulting algorithm, is invariant under sign-preserving transformations of the constraint values and can handle binary, flat, and deceptive constraints. Somewhat surprisingly, we find that adopting a sign-based classification model of the constraints makes it possible to solve classes of constrained problems which cannot be solved with the original Augmented Lagrangian method using the true constraint value. (10.1145/3712256.3726435)
    DOI : 10.1145/3712256.3726435
  • Prediction-Aware Learning in Multi-Agent Systems
    • Capitaine Aymeric
    • Boursier Etienne
    • Moulines Eric
    • Jordan Michael I.
    • Durmus Alain
    , 2025, PMLR 267. The framework of uncoupled online learning in multiplayer games has made significant progress in recent years. In particular, the development of time-varying games has considerably expanded its modeling capabilities. However, current regret bounds quickly become vacuous when the game undergoes significant variations over time, even when these variations are easy to predict. Intuitively, the ability of players to forecast future payoffs should lead to tighter guarantees, yet existing approaches fail to incorporate this aspect. This work aims to fill this gap by introducing a novel prediction-aware framework for time-varying games, where agents can forecast future payoffs and adapt their strategies accordingly. In this framework, payoffs depend on an underlying state of nature that agents predict in an online manner. To leverage these predictions, we propose the POWMU algorithm, a contextual extension of the optimistic Multiplicative Weight Update algorithm, for which we establish theoretical guarantees on social welfare and convergence to equilibrium. Our results demonstrate that, under bounded prediction errors, the proposed framework achieves performance comparable to the static setting. Finally, we empirically demonstrate the effectiveness of POWMU in a traffic routing experiment.
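POWMU itself is a contextual extension; as a baseline sketch of the optimistic Multiplicative Weight Update it builds on, the player acts on cumulative payoffs plus a prediction of the upcoming payoff (here the classical choice of predicting that the last payoff repeats; POWMU would replace this with a learned, state-dependent forecast):

```python
import numpy as np

def optimistic_mwu(payoffs, eta=0.1):
    """Run optimistic MWU over a (T, n_actions) sequence of payoff vectors,
    returning the strategy played at each round."""
    n = payoffs.shape[1]
    S = np.zeros(n)          # cumulative payoffs observed so far
    m = np.zeros(n)          # prediction of the next payoff vector
    strategies = []
    for u in payoffs:
        # Optimistic step: exponential weights on cumulative payoff + prediction.
        w = np.exp(eta * (S + m))
        strategies.append(w / w.sum())
        S += u
        m = u                # predict that the most recent payoff repeats
    return np.array(strategies)

# Two actions; action 1 always pays more, so its probability grows over time.
payoffs = np.tile(np.array([0.0, 1.0]), (50, 1))
xs = optimistic_mwu(payoffs)
```

When the prediction is accurate, the optimistic term lets the strategy react one step ahead of plain MWU, which is the intuition behind the tighter prediction-aware guarantees described above.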
  • Revisiting Non-Acyclic GFlowNets in Discrete Environments
    • Morozov Nikita
    • Maksimov Ian
    • Tiapkin Daniil
    • Samsonov Sergey
    , 2025. Generative Flow Networks (GFlowNets) are a family of generative models that learn to sample objects from a given probability distribution, potentially known up to a normalizing constant. Instead of working in the object space, GFlowNets proceed by sampling trajectories in an appropriately constructed directed acyclic graph environment, greatly relying on the acyclicity of the graph. In our paper, we revisit the theory that relaxes the acyclicity assumption and present a simpler theoretical framework for non-acyclic GFlowNets in discrete environments. Moreover, we provide various novel theoretical insights related to training with fixed backward policies, the nature of flow functions, and connections between entropy-regularized RL and non-acyclic GFlowNets, which naturally generalize the respective concepts and theoretical results from the acyclic setting. In addition, we experimentally re-examine the concept of loss stability in non-acyclic GFlowNet training, as well as validate our own theoretical findings. (10.48550/arXiv.2502.07735)
    DOI : 10.48550/arXiv.2502.07735
  • Discrete Markov Probabilistic Models
    • Pham Le-Tuyet-Nhi
    • Shariatian Dario
    • Ocello Antonio
    • Conforti Giovanni
    • Durmus Alain
    , 2025. This paper introduces the Discrete Markov Probabilistic Model (DMPM), a novel algorithm for discrete data generation. The algorithm operates in the space of bits {0, 1}^d, where the noising process is a continuous-time Markov chain that can be sampled exactly via a Poissonian clock that flips labels uniformly at random. The time-reversal process, like the forward noise process, is a jump process, with its intensity governed by a discrete analogue of the classical score function. Crucially, this intensity is proven to be the conditional expectation of a function of the forward process, strengthening its theoretical alignment with score-based generative models while ensuring robustness and efficiency. We further establish convergence bounds for the algorithm under minimal assumptions and demonstrate its effectiveness through experiments on low-dimensional Bernoulli-distributed datasets and high-dimensional binary MNIST data. The results highlight its strong performance in generating discrete structures. This work bridges theoretical foundations and practical applications, advancing the development of effective and theoretically grounded discrete generative modeling.
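The forward noising process on the space of bits can be simulated exactly; the sketch below assumes one particular clock/flip convention (a single Poisson clock whose events each flip one uniformly chosen coordinate), which may differ in detail from the paper's:

```python
import numpy as np

def forward_noise(x0, rate, t, seed=0):
    """Simulate a Poisson-clock bit-flip dynamics on {0, 1}^d over [0, t].

    The number of clock rings over [0, t] is Poisson(rate * t); at each
    ring, one coordinate is chosen uniformly at random and flipped."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n_events = rng.poisson(rate * t)
    for _ in range(n_events):
        i = rng.integers(len(x))
        x[i] ^= 1            # flip the chosen bit
    return x

x0 = np.zeros(8, dtype=int)
xt = forward_noise(x0, rate=4.0, t=1.0)
# For large rate * t, each coordinate approaches an independent fair coin.
```

Generation then runs the learned time-reversal of such a jump process, with jump intensities playing the role of the score function in continuous diffusion models.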
  • On Teacher Hacking in Language Model Distillation
    • Tiapkin Daniil
    • Calandriello Daniele
    • Ferret Johan
    • Perrin Sarah
    • Vieillard Nino
    • Ramé Alexandre
    • Blondel Mathieu
    , 2025. Post-training of language models (LMs) increasingly relies on the following two stages: (i) knowledge distillation, where the LM is trained to imitate a larger teacher LM, and (ii) reinforcement learning from human feedback (RLHF), where the LM is aligned by optimizing a reward model. In the second RLHF stage, a well-known challenge is reward hacking, where the LM over-optimizes the reward model. Such a phenomenon is in line with Goodhart's law and can lead to degraded performance on the true objective. In this paper, we investigate whether a similar phenomenon, which we call teacher hacking, can occur during knowledge distillation. This could arise because the teacher LM is itself an imperfect approximation of the true distribution. To study this, we propose a controlled experimental setup involving: (i) an oracle LM representing the ground-truth distribution, (ii) a teacher LM distilled from the oracle, and (iii) a student LM distilled from the teacher. Our experiments reveal the following insights. When using a fixed offline dataset for distillation, teacher hacking occurs; moreover, we can detect it by observing when the optimization process deviates from polynomial convergence laws. In contrast, employing online data generation techniques effectively mitigates teacher hacking. More precisely, we identify data diversity as the key factor in preventing hacking. Overall, our findings provide a deeper understanding of the benefits and limitations of distillation for building robust and efficient LMs.
  • Finite-Sample Convergence Bounds for Trust Region Policy Optimization in Mean-Field Games
    • Ocello Antonio
    • Tiapkin Daniil
    • Mancini Lorenzo
    • Laurière Mathieu
    • Moulines Eric
    , 2025. We introduce Mean-Field Trust Region Policy Optimization (MF-TRPO), a novel algorithm designed to compute approximate Nash equilibria for ergodic Mean-Field Games (MFG) in finite state-action spaces. Building on the well-established performance of TRPO in the reinforcement learning (RL) setting, we extend its methodology to the MFG framework, leveraging its stability and robustness in policy optimization. Under standard assumptions in the MFG literature, we provide a rigorous analysis of MF-TRPO, establishing theoretical guarantees on its convergence. Our results cover both the exact formulation of the algorithm and its sample-based counterpart, where we derive high-probability guarantees and finite sample complexity. This work advances MFG optimization by bridging RL techniques with mean-field decision-making, offering a theoretically grounded approach to solving complex multi-agent problems. (10.48550/arXiv.2505.22781)
    DOI : 10.48550/arXiv.2505.22781
  • Unified Breakdown Analysis for Byzantine Robust Gossip
    • Gaucher Renaud
    • Dieuleveut Aymeric
    • Hendrikx Hadrien
    , 2025, 267, pp.18868-18896. In decentralized machine learning, different devices communicate in a peer-to-peer manner to collaboratively learn from each other's data. Such approaches are vulnerable to misbehaving (or Byzantine) devices. We introduce F-RG, a general framework for building robust decentralized algorithms with guarantees arising from robust-sum-like aggregation rules F. We then investigate the notion of breakdown point, and show an upper bound on the number of adversaries that decentralized algorithms can tolerate. We introduce a practical robust aggregation rule, coined CSours, such that CSours-RG has a near-optimal breakdown. Other choices of aggregation rules lead to existing algorithms such as ClippedGossip or NNA. We give experimental evidence to validate the effectiveness of CSours-RG and highlight the gap with NNA, in particular against a novel attack tailored to decentralized communications.
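The paper's CSours rule is its own contribution; as a generic sketch of the clipped-gossip-style aggregation family it generalizes, one node can bound each neighbor's influence by clipping the norm of the difference to its local model:

```python
import numpy as np

def clipped_aggregate(x_self, neighbors, tau):
    """One clipped-gossip-style aggregation step at a single node.

    Each neighbor contribution is the difference to the local model,
    clipped in norm at tau, so a Byzantine neighbor's influence is
    bounded. A generic illustration, not the paper's CSours rule."""
    agg = x_self.copy()
    for y in neighbors:
        d = y - x_self
        nrm = np.linalg.norm(d)
        if nrm > tau:
            d = d * (tau / nrm)          # clip the update direction at norm tau
        agg += d / (len(neighbors) + 1)
    return agg

x = np.zeros(2)
honest = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
byzantine = [np.array([1e6, 1e6])]       # an arbitrarily bad message
out = clipped_aggregate(x, honest + byzantine, tau=2.0)
# The Byzantine contribution is bounded by tau, so out stays near the honest mean.
```

The breakdown-point analysis in the paper quantifies, for aggregation rules of this kind, how many such Byzantine neighbors a decentralized algorithm can tolerate before its guarantees fail.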
  • High Performance Parallel Solvers for the time-harmonic Maxwell Equations
    • Fressart Elise
    • Dubois Sébastien
    • Gouarin Loïc
    • Massot Marc
    • Nowak Michel
    • Spillane Nicole
    , 2025. We consider the numerical solution of large scale time-harmonic Maxwell equations. To this day, this problem remains difficult, in particular because the equations are neither Hermitian nor semi-definite. Our approach is to compare different strategies for solving this set of equations with preconditioners that are available either in PETSc, MUMPS, or in hypre. Four different preconditioners are considered. The first is the sparse approximate inverse, which is often applied to electromagnetic problems. The second is Restricted Additive Schwarz, a domain decomposition preconditioner. The third is the Hiptmair-Xu preconditioner which is tailored to the positive Maxwell equations, a nearby problem. The final preconditioner is MUMPS's Block Low-Rank method, a compressed block procedure. We also compare the performance of this method to the standard LU factorization technique, which is a direct solver. Performance with respect to the mesh size, the number of CPU cores, the wavelength and the physical size of the domain is considered. This work in progress yields preliminary conclusions in favour of the Hiptmair-Xu and the Block Low-Rank preconditioners.