About the project

Publications:

1. Authors: Giuseppe Puglisi, Davide Poletti, Giulio Fabbian, Carlo Baccigalupi, Luca Heltai, Radek Stompor

Title: Iterative map-making with two-level preconditioning for polarized Cosmic Microwave Background data sets

Abstract: An estimation of the sky signal from streams of Time Ordered Data (TOD) acquired by Cosmic Microwave Background (CMB) experiments is one of the most important steps in CMB data analysis, referred to as the map-making problem. The continuously growing CMB data sets render the map-making problem ever more challenging in terms of computational cost and memory, in particular in the context of ground-based experiments. Here we study a novel class of Preconditioned Conjugate Gradient (PCG) solvers which invoke two-level preconditioners, and compare them against the PCG solvers commonly used in map-making, considering their precision and time-to-solution. We evaluate these new methods on realistic, simulated data sets reflecting the characteristics of current and forthcoming CMB ground-based experiments, and we develop an embarrassingly parallel implementation of the approach in which each processor performs a sequential map-making for a subset of the TOD. We find that, in terms of map-level residuals, the new class of solvers permits reaching tolerances up to 3 orders of magnitude better than the standard approach, whose residual level often saturates before convergence is reached. This corresponds to an important improvement in the precision of the recovered power spectra, in particular on the largest angular scales. The new method also typically requires fewer iterations to reach a required precision and thus shorter runtimes for a single map-making solution. However, the construction of an appropriate two-level preconditioner can be as costly as a single standard map-making run. Nevertheless, if the same problem needs to be solved multiple times, e.g., as in Monte Carlo simulations, this cost is incurred only once, and the method should be competitive not only in precision but also in overall performance.

Available from arXiv. See also a blog post by Davide.

Published as: Astronomy & Astrophysics, Volume 618, id.A62, 14 pp. (open access)
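
For readers who want a feel for the idea, the sketch below contrasts a one-level (Jacobi) PCG with a two-level PCG on a small, synthetic symmetric positive-definite system. It is a minimal illustration only: the matrix, the coarse space Z (built here from exact eigenvectors, whereas in practice cheaper surrogates are used), and the additive coarse correction are assumptions standing in for the actual map-making operators of the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

# Toy SPD stand-in for the map-making system A x = b: a few very small
# eigenvalues (the "slow modes" a two-level preconditioner targets) and an
# otherwise well-clustered spectrum.
n, k = 400, 10
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
eigs = np.concatenate([np.geomspace(1e-7, 1e-4, k), np.linspace(0.5, 1.0, n - k)])
A = (Q * eigs) @ Q.T
b = rng.standard_normal(n)

# First level: standard diagonal (Jacobi) preconditioner.
d = np.diag(A).copy()

# Second level: additive coarse correction built from vectors Z spanning the
# slow modes (here the exact eigenvectors, purely for illustration).
Z = Q[:, :k]
E = Z.T @ A @ Z                                   # small k x k coarse matrix

def make_prec(two_level):
    def apply(r):
        y = r / d
        if two_level:
            y = y + Z @ np.linalg.solve(E, Z.T @ r)
        return y
    return LinearOperator((n, n), matvec=apply)

for name, M in [("one-level", make_prec(False)), ("two-level", make_prec(True))]:
    count = [0]
    # note: scipy >= 1.12 uses rtol (older versions: tol)
    x, info = cg(A, b, M=M, rtol=1e-8, maxiter=10000,
                 callback=lambda xk: count.__setitem__(0, count[0] + 1))
    print(f"{name} PCG: {count[0]} iterations (info={info})")
```

The two-level variant should need far fewer iterations, because the coarse correction removes the handful of near-degenerate modes that stall the one-level solver.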


2. Authors: Jan Papež, Laura Grigori, Radek Stompor

Title: Solving linear equations with messenger-field and conjugate gradients techniques – an application to CMB data analysis

Abstract: We discuss linear system solvers invoking a messenger field and compare them with (preconditioned) conjugate gradient approaches. We show that the messenger-field techniques correspond to fixed-point iterations of an appropriately preconditioned initial system of linear equations. We then argue that a conjugate gradient solver applied to the same preconditioned system, or equivalently a preconditioned conjugate gradient solver using the same preconditioner and applied to the original system, will in general ensure at least comparable and typically better performance in terms of the number of iterations to convergence and time-to-solution. We illustrate our conclusions with two common examples drawn from Cosmic Microwave Background (CMB) data analysis: Wiener filtering and map-making. In addition, and contrary to the standard lore in the CMB field, we show that the performance of the preconditioned conjugate gradient solver can depend significantly on the starting vector. This observation seems of particular importance for map-making of high signal-to-noise sky maps and should therefore be of relevance for the next generation of CMB experiments.

Available from arXiv. See also a blog post by Jan.

Published as: Astronomy & Astrophysics, Volume 620, id.A59, 14 pp. (open access)
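
A minimal numerical illustration of the paper's central observation, under toy assumptions: a simple splitting with M = τI plays the role of the messenger-field splitting, and we compare the resulting fixed-point iteration with conjugate gradients preconditioned by the same M (the actual Wiener-filter and map-making systems of the paper are not reproduced here).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)

# Toy SPD system A x = b standing in for the Wiener-filter equations.
n = 300
B = rng.standard_normal((n, n))
A = B @ B.T / n + 0.5 * np.eye(n)
b = rng.standard_normal(n)

# Splitting A = M - K with M = tau * I, mimicking the role the splitting
# N = tau*I + N_bar plays in the messenger-field method.
tau = 1.05 * np.linalg.eigvalsh(A)[-1]

# 1) Fixed-point iteration x <- x + M^{-1} (b - A x), the structure to which
#    the messenger-field update reduces.
x, fp_iters = np.zeros(n), 0
while np.linalg.norm(b - A @ x) > 1e-8 * np.linalg.norm(b):
    x += (b - A @ x) / tau
    fp_iters += 1

# 2) Conjugate gradients with the very same preconditioner M = tau * I
#    (scipy >= 1.12 uses rtol; older versions: tol).
count = [0]
M = LinearOperator((n, n), matvec=lambda r: r / tau)
xcg, info = cg(A, b, M=M, rtol=1e-8,
               callback=lambda xk: count.__setitem__(0, count[0] + 1))

print(f"fixed point: {fp_iters} iterations, PCG: {count[0]} iterations")
```

On this toy system the preconditioned conjugate gradient run converges in a fraction of the fixed-point iterations, which is the generic behaviour the paper establishes.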


3. Authors: Jan Papež, Laura Grigori, Radek Stompor

Title: Accelerating linear system solvers for time domain component separation of cosmic microwave background data

Abstract: Component separation is one of the key stages of any modern cosmic microwave background (CMB) data analysis pipeline. It is an inherently nonlinear procedure and typically involves a series of sequential solutions of linear systems with similar, albeit not identical, system matrices, derived for different data models of the same data set. Sequences of this kind arise, for instance, in the maximization of the data likelihood with respect to foreground parameters or in sampling of their posterior distribution, but they are also common in many other contexts. In this work we consider solving the component separation problem directly in the measurement (time) domain, which can have a number of important advantages over the more standard pixel-based methods, in particular when non-negligible time-domain noise correlations are present, as is commonly the case. The time-domain approach implies, however, a significant computational effort due to the need to manipulate the full volume of the time-domain data set. To address this challenge, we propose and study efficient solvers adapted to time-domain-based component separation systems and their sequences, capable of capitalizing on information derived from previous solutions. This is achieved either by adapting the initial guess of the subsequent system or through so-called subspace recycling, which allows one to construct progressively more efficient two-level preconditioners. We report an overall speed-up over solving the systems independently of a factor of nearly 7, or 5, in worked examples inspired respectively by the likelihood maximization and likelihood sampling procedures considered in this work.

Available from arXiv. See also a blog post by Jan.

Published as: Astronomy & Astrophysics, Volume 638, id.A73, 16 pp. (2020)
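
The simpler of the two acceleration ideas, reusing earlier solutions as starting vectors, can be sketched on a toy sequence of slowly varying SPD systems (the matrices and perturbations are invented for illustration; the paper's subspace recycling, which builds two-level deflation preconditioners from previous Krylov information, is not reproduced here).

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)

# A sequence of slowly varying SPD systems, mimicking the linear systems met
# while iterating over foreground (spectral) parameters.
n = 300
B = rng.standard_normal((n, n))
A0 = B @ B.T / n + 0.1 * np.eye(n)
b = rng.standard_normal(n)

def solve(A, b, x0):
    count = [0]
    x, _ = cg(A, b, x0=x0, rtol=1e-10,
              callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return x, count[0]

cold, warm, x_prev = 0, 0, None
for i in range(8):
    # small perturbation of the system matrix from one step to the next
    P = rng.standard_normal((n, n))
    A = A0 + 1e-3 * i * (P @ P.T) / n
    _, it_cold = solve(A, b, x0=None)           # start from zero every time
    x_prev, it_warm = solve(A, b, x0=x_prev)    # reuse the previous solution
    cold, warm = cold + it_cold, warm + it_warm

print(f"cold starts: {cold} total iterations, warm starts: {warm}")
```

After the first system, each warm-started solve begins close to its solution, so the cumulative iteration count drops substantially, the same effect the paper exploits, and then amplifies with subspace recycling.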


4. Authors: Clara Vergès, Josquin Errard, Radek Stompor

Title: Framework for analysis of next generation, polarized CMB data sets in the presence of Galactic foregrounds and systematic effects

Abstract: Reaching sufficient sensitivity to detect primordial B-modes requires modern CMB polarization experiments to rely on new technologies that enable the deployment of arrays of thousands of detectors with broad frequency coverage, operated for extended periods of time. This increased complexity of experimental design unavoidably introduces new instrumental and systematic effects, which in turn may impact the performance of the new instruments. In this work we extend the standard data analysis pipeline by including a (parametric) model of instrumental effects directly in the data model. We then correct for them in the analysis, accounting for the additional uncertainty in the final results. We embed these techniques within a general, end-to-end formalism for estimating the impact of the instrument and foreground models on constraints on the amplitude of the primordial B-mode signal. We focus on the parametric component separation approach, which we generalize to allow for the simultaneous estimation of instrumental and foreground parameters. We demonstrate the framework by studying in detail the effects induced by an achromatic half-wave plate (HWP), which lead to a frequency-dependent variation of the instrument polarization angle, and by experimental bandpasses, which define the observational frequency bands. We assume a typical Stage-3 CMB polarization experiment and show that maps recovered from raw data collected at each frequency band will unavoidably be linear mixtures of the Q and U Stokes parameters. We then derive a new generalized data model appropriate for such cases and extend the component separation approach to account for it. We find that some of the instrumental parameters, in particular those describing the HWP, can be successfully constrained by the data themselves without the need for external information, while others, like the bandpasses, need to be known with good precision in advance. We also show that, if the assumed model is an accurate description of the HWP and we have sufficient information about the bandpasses, our approach can successfully correct for these effects. In particular, we recover the tensor-to-scalar ratio r down to the precision levels typical of Stage-3 experiments, which aim at setting a (1σ) upper limit on r of ∼10⁻³ if r = 0.

Available from the journal and arXiv.

Published as: Physical Review D, Volume 103, Issue 6, article id.063507 (2021)
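
The Q/U mixing mentioned in the abstract can be illustrated in a few lines: an effective, frequency-dependent polarization-angle offset ψ(ν), such as a non-ideal achromatic HWP can induce, rotates the Stokes vector (Q, U), so each single-frequency map is a linear mixture of the true Q and U. The band frequencies and angles below are hypothetical stand-ins, and the sign convention is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# True Q and U Stokes maps for a handful of pixels (arbitrary units).
npix = 4
Q = rng.standard_normal(npix)
U = rng.standard_normal(npix)

def observe(Q, U, psi):
    """(Q, U) as seen through an effective polarization-angle offset psi
    (radians); the sign convention here is illustrative."""
    c, s = np.cos(2.0 * psi), np.sin(2.0 * psi)
    return c * Q + s * U, -s * Q + c * U

# Hypothetical per-band effective angles induced by a non-ideal achromatic
# HWP; the values are stand-ins, not fitted instrument parameters.
for nu_ghz, psi_deg in [(93, 0.5), (145, -0.3), (225, 1.2)]:
    Qo, Uo = observe(Q, U, np.deg2rad(psi_deg))
    leak = np.max(np.abs(Qo - Q))           # leakage of U into the Q map
    print(f"{nu_ghz:3d} GHz, psi = {psi_deg:+.1f} deg: "
          f"max |Q_obs - Q_true| = {leak:.4f}")
```

In the paper, such per-band angles become parameters of the generalized data model and are estimated jointly with the foreground parameters during component separation.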


5. Authors: Nicolas Chopin, Gabriel Ducrocq

Title: Fast compression of MCMC output

Abstract: We propose cube thinning, a novel method for compressing the output of an MCMC (Markov chain Monte Carlo) algorithm when control variates are available. It amounts to resampling the initial MCMC sample (according to weights derived from control variates), while imposing equality constraints on averages of these control variates, using the cube method of Deville and Tillé (2004). Its main advantage is that its CPU cost is linear in N, the original sample size, and constant in M, the required size of the compressed sample. This compares favourably to Stein thinning (Riabiz et al., 2020), which has complexity O(NM²) and requires the availability of the gradient of the target log-density (which automatically implies the availability of control variates). Our numerical experiments suggest that cube thinning is also competitive in terms of statistical error.

Available from arXiv.
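
The sketch below implements the flight phase of the cube method, the building block cube thinning relies on, and uses it to thin a toy sample while exactly preserving the sample average of a control variate. It is a simplified rendering under stated assumptions: the control variate h is invented, the landing phase is replaced by crude rounding, and uniform inclusion probabilities are used, so this is not the paper's implementation.

```python
import numpy as np
from scipy.linalg import null_space

def cube_flight(pi, A, rng):
    """Flight phase of the cube method (Deville & Tillé, 2004): move the
    inclusion probabilities pi inside [0,1]^N, keeping A @ pi exactly fixed,
    until at most A.shape[0] coordinates remain fractional."""
    pi = pi.astype(float).copy()
    p, eps = A.shape[0], 1e-10
    while True:
        free = np.flatnonzero((pi > eps) & (pi < 1.0 - eps))
        if free.size <= p:
            return pi
        idx = free[: p + 1]                 # p+1 free coords: kernel nonempty
        u = np.zeros_like(pi)
        u[idx] = null_space(A[:, idx])[:, 0]
        pos, neg = u > eps, u < -eps
        # largest steps forwards/backwards that stay inside the unit cube
        l1 = min(((1.0 - pi[pos]) / u[pos]).min(initial=np.inf),
                 (pi[neg] / -u[neg]).min(initial=np.inf))
        l2 = min((pi[pos] / u[pos]).min(initial=np.inf),
                 ((1.0 - pi[neg]) / -u[neg]).min(initial=np.inf))
        lam = l1 if rng.random() < l2 / (l1 + l2) else -l2   # E[step] = 0
        pi = np.clip(pi + lam * u, 0.0, 1.0)

# Toy use: thin N points to about M while preserving the sample size and the
# sample mean of a (toy) control variate h.
rng = np.random.default_rng(4)
N, M = 1000, 100
x = rng.standard_normal(N)                  # stand-in for an MCMC sample
h = x                                       # toy control variate
pi0 = np.full(N, M / N)                     # uniform inclusion probabilities
A = np.vstack([np.ones(N), h])              # balancing constraints
s = cube_flight(pi0, A, rng)
keep = s > 0.5                              # crude stand-in for the landing phase
print(f"kept {keep.sum()} of {N}; mean of h: full = {h.mean():+.4f}, "
      f"thinned = {h[keep].mean():+.4f}")
```

Because each step moves pi within the kernel of the constraints, the constrained averages are preserved exactly up to the final rounding, and the cost per step is linear in N, which is the source of the favourable complexity quoted in the abstract.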


6. Authors: G. Ducrocq, N. Chopin, J. Errard, R. Stompor

Title: Amended Gibbs samplers for Cosmic Microwave Background power spectrum estimation

Abstract: We study different variants of the Gibbs sampler algorithm from the perspective of their applicability to the estimation of the power spectra of cosmic microwave background (CMB) anisotropies. We focus on approaches which aim at reducing the cost of the computationally heavy constrained-realization step and which capitalize on the interweaving strategy to ensure that our algorithms mix well for both high and low signal-to-noise components. In particular, we propose an approach, which we refer to as Centered overrelax, that avoids the constrained-realization step completely at the cost of additional auxiliary variables and the need for overrelaxation. We demonstrate these variants and compare their merits on full- and cut-sky simulations, quantifying their performance in terms of effective sample size (ESS) per second. We find that on nearly full-sky, satellite-like data, the proposed Gibbs sampler with overrelaxation performs between one and two orders of magnitude better than the usual Gibbs sampler, potentially providing an interesting alternative to the currently favored approaches.

Available from arXiv.
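
The samplers studied in the paper combine several ingredients (interweaving, auxiliary variables); the sketch below isolates just the overrelaxation building block, Adler-type overrelaxed Gaussian updates, on a toy, strongly correlated bivariate Gaussian target, where its effect on mixing relative to plain Gibbs is easy to see. The target and the value alpha = -0.95 are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy target: a bivariate Gaussian with unit marginals and correlation rho,
# a classic case where plain Gibbs sampling mixes slowly.
rho = 0.99

def gibbs(n_iter, alpha=0.0):
    """alpha = 0 gives the usual Gibbs sampler; -1 < alpha < 0 overrelaxes.
    Both conditionals are N(rho * other, 1 - rho**2), and the update
    x <- mu + alpha*(x - mu) + sqrt(1 - alpha^2) * sigma * xi
    leaves the Gaussian conditional (hence the joint) invariant."""
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    sigma = np.sqrt(1.0 - rho**2)
    for i in range(n_iter):
        for j in (0, 1):
            mu = rho * x[1 - j]
            x[j] = (mu + alpha * (x[j] - mu)
                    + np.sqrt(1.0 - alpha**2) * sigma * rng.standard_normal())
        out[i] = x
    return out

# Crude lag-1 autocorrelation as a stand-in for ESS per draw.
lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
plain = gibbs(20000, alpha=0.0)
over = gibbs(20000, alpha=-0.95)
print(f"lag-1 autocorrelation of x0: plain = {lag1(plain[:, 0]):.3f}, "
      f"overrelaxed = {lag1(over[:, 0]):.3f}")
```

Overrelaxation suppresses the random-walk behaviour along the correlated direction, which typically translates into a much larger effective sample size per draw, the metric the paper uses to compare its sampler variants.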