Inferring three-nucleon couplings from multi-messenger neutron-star observations - Nature Communications



Here, we develop a framework that allows us to constrain LECs directly from observations of neutron stars via Bayesian inference and, thus, explore EFT-based interactions for the densest neutron-rich systems in the cosmos. We consider a Hamiltonian at next-to-next-to-leading order (N2LO) in the EFT expansion and focus on the leading (N2LO) 3N forces that provide a strong contribution to the EOS of neutron matter. As LEC values obtained from two entirely distinct sources (neutron star observations versus atomic nuclei) can be checked against each other for consistency, our framework will provide a unique test for the domain of applicability of nuclear interactions and of the convergence of the EFT expansion in dense matter.

Our Bayesian inference setup to constrain c₁ and c₃ from astrophysical data involves sampling over the microphysical LECs using a Markov chain Monte Carlo (MCMC) stochastic sampling algorithm. Every iteration of the sampler requires the solution of the many-body Schrödinger equation, which yields the neutron star EOS. The EOS is subsequently translated to astrophysical observables, such as radii or tidal deformabilities, by solving the Tolman-Oppenheimer-Volkoff (TOV) equations and the equations for a stationary quadrupolar tidal deformation. Given the large number of iterations required to sample the posterior distribution function, this is a computationally intractable problem, as a single high-fidelity iteration requires many CPU-hours.
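The TOV step of this pipeline can be sketched in a few lines: given any barotropic relation ε(P), the coupled TOV equations are integrated outward from the center until the pressure vanishes, yielding one mass-radius point. The sketch below uses a toy polytropic EOS in geometric units (G = c = 1, lengths in km); it is an illustration of the equations only, not the paper's production solver, and all numerical values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy polytropic EOS, P = K * eps^2 (geometric units G = c = 1, lengths in km).
# K and the central pressure are illustrative placeholders.
K = 100.0

def eps_of_p(p):
    # invert P = K * eps^2 for the energy density
    return np.sqrt(max(p, 0.0) / K)

def tov_rhs(r, y):
    p, m = y
    eps = eps_of_p(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def surface(r, y):
    return y[0] - 1e-12  # terminate when the pressure reaches (numerical) zero
surface.terminal = True

def solve_tov(p_central):
    r0 = 1e-3  # start slightly off-center to avoid the r = 0 singularity
    y0 = [p_central, 4.0 / 3.0 * np.pi * r0**3 * eps_of_p(p_central)]
    sol = solve_ivp(tov_rhs, (r0, 100.0), y0, events=surface,
                    rtol=1e-8, atol=1e-14)
    radius_km = sol.t[-1]
    mass_msun = sol.y[1, -1] / 1.4766  # 1 M_sun ~ 1.4766 km in G = c = 1 units
    return radius_km, mass_msun
```

In the full framework, repeated solves of this kind inside the sampler are what make a brute-force approach expensive, motivating the emulators described next.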

We overcome this challenge by employing recent advances in machine-learning-based algorithms that act as surrogate models for more complex high-fidelity calculations. First, for rapid calculation of the EOS, we employ the recently proposed parametric matrix model (PMM), which is trained on third-order many-body perturbation theory (MBPT) calculations of the neutron-matter EOS. In the left panel of Fig. 1, we show results for 70 validation samples for a PMM trained on 30 high-fidelity MBPT calculations. We find emulator uncertainties to be well under control, with predictions of the EOS at nuclear saturation density n₀ = 0.16 fm⁻³ differing from the high-fidelity results by 0.04% on average. This emulator uncertainty is much smaller than the LEC variation, indicated by the spread of samples, as well as the uncertainty in the MBPT calculations themselves (see Methods). We then extend these neutron-matter results to neutron star matter in beta equilibrium. We model the neutron star EOS using three different parameterizations (see Methods), the first and simplest of which only uses the emulator results based on c₁ and c₃ to characterize the EOS up to 10n₀.
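The core idea behind a parametric matrix model can be illustrated in miniature: the emulated observable is an eigenvalue of a small matrix that depends affinely on the LECs, so each evaluation costs only a tiny eigensolve. The matrices below are random stand-ins rather than matrices trained on MBPT snapshots, and the LEC values are merely near the laboratory determinations; everything here is illustrative.

```python
import numpy as np

# Parametric-matrix-model sketch: the emulated energy is the lowest
# eigenvalue of a small symmetric matrix affine in the LECs.
# M0, M1, M3 are random stand-ins; in a real PMM they are trained
# on high-fidelity calculations.
rng = np.random.default_rng(42)

def sym(a):
    return 0.5 * (a + a.T)

dim = 4
M0 = sym(rng.normal(size=(dim, dim)))
M1 = 0.1 * sym(rng.normal(size=(dim, dim)))  # sensitivity to c1
M3 = 0.1 * sym(rng.normal(size=(dim, dim)))  # sensitivity to c3

def pmm_energy(c1, c3):
    """Fast surrogate evaluation: one small symmetric eigensolve."""
    return np.linalg.eigvalsh(M0 + c1 * M1 + c3 * M3)[0]

e_lab = pmm_energy(-0.74, -3.61)  # evaluate near laboratory-like LEC values
```

Because the matrix is tiny, millions of such evaluations are feasible inside an MCMC loop, which is the point of the surrogate.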

The nucleonic description of dense matter eventually breaks down, and exotic phenomena such as QCD phase transitions in high-density matter might appear. Therefore, in the following, we assume the validity of the EFT expansion only up to 2n₀, and hence, the LECs c₁ and c₃ determine the EOS only up to this density. An important aspect of our framework is to find a suitable parameterization of, and marginalization over, uncertainties in the high-density EOS. Here, we use two different models based on the speed of sound that allow for a physics-agnostic extension of the EOS above 2n₀. These models employ either 5 or 7 parameters, including the two LECs. Comparing these models with the 2-parameter model illustrates the importance of the marginalization over the high-density EOS.
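A minimal sketch of such a physics-agnostic speed-of-sound extension, assuming a piecewise-linear parameterization between density nodes (the node positions and values below stand in for the model's free parameters and are not fitted to anything):

```python
import numpy as np

n0 = 0.16  # nuclear saturation density in fm^-3

def cs2_extension(n, nodes, cs2_nodes):
    """Piecewise-linear speed of sound squared above 2*n0 (units of c^2).

    nodes / cs2_nodes play the role of the model's free high-density
    parameters; the values used below are illustrative.
    """
    return np.clip(np.interp(n, nodes, cs2_nodes), 0.0, 1.0)  # causal, stable

nodes = np.array([2 * n0, 4 * n0, 10 * n0])
cs2_nodes = np.array([0.3, 0.8, 0.5])

n_grid = np.linspace(2 * n0, 10 * n0, 200)
cs2 = cs2_extension(n_grid, nodes, cs2_nodes)
```

Clipping to [0, 1] enforces stability and causality by construction, which is what makes such extensions agnostic yet physical.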

For a given EOS model, we use an ensemble of neural networks to predict the tidal deformability of neutron stars. Results obtained from validating our emulator on ~60,000 samples are shown in the right panel of Fig. 1 for the 5-parameter model, but the emulator performance is similar for the other EOS models. We find that both the PMM and neural-network emulators provide highly accurate and rapid emulation of the underlying complex calculations that are required to compute neutron star properties starting from microscopic LECs. The use of machine-learning algorithms in this manner is a key part of our analysis that allows us to sample complex posterior distribution functions in our astrophysical Bayesian analysis framework.
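The ensemble idea can be shown generically: several independently initialized regressors are averaged, with their spread serving as an internal error estimate. The "networks" below are toy random-feature regressors fit to a synthetic target, purely to illustrate the ensemble mechanics; they are not the paper's trained architecture.

```python
import numpy as np

# Ensemble-of-regressors sketch: average the predictions of several
# independently initialized models; their spread estimates the error.
rng = np.random.default_rng(1)
x_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = np.sin(x_train[:, 0]) + x_train[:, 1] ** 2  # synthetic target

def fit_member(x, y, n_features, seed):
    r = np.random.default_rng(seed)
    w = r.normal(size=(x.shape[1], n_features))  # fixed random hidden layer
    coef, *_ = np.linalg.lstsq(np.tanh(x @ w), y, rcond=None)
    return w, coef

ensemble = [fit_member(x_train, y_train, 50, seed) for seed in range(5)]

def predict(x):
    preds = np.array([np.tanh(x @ w) @ coef for w, coef in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)  # central value and spread

mean, spread = predict(np.array([[0.2, -0.3]]))
```

The returned spread gives a cheap cross-check on emulator accuracy, analogous to the validation shown in Fig. 1.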

We condition the LECs c₁ and c₃ on the LIGO/Virgo Collaboration's first GW observation of a binary neutron star merger, GW170817, as well as on X-ray observations of three pulsars made by NASA's Neutron Star Interior Composition Explorer (NICER) mission: PSR J0030+0451, PSR J0740+6620, and PSR J0437-4715. For these observations, we employ likelihood functions of the source parameters (neutron star masses, binary tidal deformabilities, radii, etc.) that are proportional to the posteriors computed in Abbott et al. for GW170817, and in Riley et al., Salmi et al., and Choudhury et al. for the three NICER observations. With wide uniform priors on the LECs c₁ and c₃ between zero and twice the mean of their laboratory values, the posterior conditioned on GW170817 is sampled using MCMC, and the posterior samples are weighted according to the product of the NICER likelihoods. Each likelihood evaluation is carried out by evaluating our two emulators consecutively, thus converting microscopic LECs into macroscopic neutron star observables.
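The reweighting step described above can be sketched generically: posterior samples conditioned on GW170817 receive importance weights equal to the product of the per-pulsar NICER likelihoods. Everything below is synthetic; the uniform "posterior" draws and the Gaussian stand-in likelihoods are placeholders, not the paper's actual distributions.

```python
import numpy as np

# Importance-reweighting sketch: samples from the GW170817-conditioned
# posterior are weighted by the product of the NICER likelihoods.
rng = np.random.default_rng(7)
c3_samples = rng.uniform(-7.22, 0.0, size=5000)  # prior range: [2 x lab value, 0]

def log_like(c3, center, width):
    # hypothetical Gaussian stand-in for one pulsar's marginal likelihood
    return -0.5 * ((c3 - center) / width) ** 2

nicer_standins = [(-2.5, 2.0), (-3.0, 2.5), (-2.0, 3.0)]  # (center, width)
log_w = sum(log_like(c3_samples, mu, sig) for mu, sig in nicer_standins)
w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
w /= w.sum()

c3_mean = np.sum(w * c3_samples)  # reweighted posterior mean
```

Working in log-weights and subtracting the maximum before exponentiating avoids underflow when many likelihoods are multiplied.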

The marginalized two-dimensional posterior distribution on c₁ and c₃ is shown in Fig. 2. We find essentially no influence of the neutron star observations on the parameter c₁, since the effect of c₁ on the EOS of neutron matter is subdominant compared to that of c₃. On the other hand, given the GW and X-ray observations, we see a clear preference for less repulsive 3N forces. We attribute this to c₃ being negatively correlated with the pressure inside neutron stars. Since GW170817, as well as the X-ray observation of PSR J0740+6620, favor more compact neutron stars, this translates to smaller pressures in neutron star interiors. The upper bound of the posterior is set by the prior boundary at c₃ = 0 GeV⁻¹; however, this boundary is physically motivated, since positive values of c₃ correspond to an attractive 3N force, which would lead to the collapse of neutron matter. Although the posterior median of c₃ = -2.52 GeV⁻¹ deviates from the laboratory value of c₃ = -3.61 ± 0.05 GeV⁻¹, the laboratory and astrophysical determinations are consistent at the 90% confidence level, albeit with the latter constraint currently having large uncertainties. Existing neutron star observations do not offer high-precision constraints on the LECs, given the significant statistical uncertainties present in these observations.

In the next decade, two upcoming next-generation ground-based GW detectors are expected to begin operations, namely the Einstein Telescope (ET) in Europe and Cosmic Explorer (CE) in the United States. These detectors will provide a sensitivity that is improved by an order of magnitude over current GW detectors. Consequently, they are expected to observe many events per year with signal-to-noise ratios (SNRs) above 100. Here, we demonstrate how such observations, at the level of populations of events, can provide stringent constraints on 3N couplings.

We simulate a population of neutron star merger events that can potentially be observed within a year-long observing run by a network of three next-generation detectors. Our detector network consists of two CEs, each with a 40 km arm length, and one ET with its fiducial 10 km triangular design. For our population model, we assume a uniform distribution in the range 1-2 M⊙ for the component masses and a random pairing into binary systems. Furthermore, we assume a constant local merger rate of 170 Gpc⁻³ yr⁻¹, which is consistent with the merger rate inferred in Abbott et al., and a uniform distribution of sources in co-moving volume. This results in a total of approximately 400 events within a redshift z of 0.2 observed by the detector network within one year. A lower merger rate would only affect our results by increasing the time required to observe the same number of events. There are many additional events with z > 0.2 that are detected with lower SNR; however, here we focus on the loudest signals in the detected population. The underlying EOS explored by our population, i.e., the EOS that relates the component masses of our simulated binaries to the corresponding component tidal deformabilities, is generated from our 5-parameter model with c₃ = -3.68 GeV⁻¹, which is in agreement with the laboratory value at the 90% confidence level.
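A minimal sketch of such a population draw, assuming uniform component masses with random pairing and sources uniform in (Euclidean) volume, with a crude SNR scaling used only to rank events; the distance scale and SNR normalization are illustrative placeholders:

```python
import numpy as np

# Population sketch: uniform component masses in [1, 2] M_sun with random
# pairing, sources uniform in comoving volume, and a crude
# SNR ~ Mc^(5/6) / d scaling used only to rank events.
rng = np.random.default_rng(2025)
n_events = 400

m1 = rng.uniform(1.0, 2.0, n_events)
m2 = rng.uniform(1.0, 2.0, n_events)
m1, m2 = np.maximum(m1, m2), np.minimum(m1, m2)  # convention: m1 >= m2

d_max = 1000.0  # Mpc, roughly z ~ 0.2 (placeholder)
d = d_max * rng.uniform(0.0, 1.0, n_events) ** (1.0 / 3.0)  # p(d) ~ d^2

chirp_mass = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
snr = 200.0 * (chirp_mass / 1.2) ** (5.0 / 6.0) * (100.0 / d)
loudest = np.argsort(snr)[-20:]  # indices of the 20 highest-SNR events
```

Ranking by this leading-order SNR scaling is what makes the loudest-events cut depend on chirp mass and distance rather than on the EOS itself.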

From the detected events in our simulated population, we select the N = 20 highest-SNR events to analyze. This most informative subset is large enough that its chirp masses span the entire range allowed by the population model, with at least one event in every chirp mass bin of width 0.1 M⊙. This choice amounts to a selection cut at an SNR of about 225. Because the SNR depends primarily on the chirp mass, whose distribution is independent of the EOS in our assumed population model, this selection cut does not bias the recovery of EOS parameters. GW selection effects can thus be neglected in our inference.

The evolution of the uncertainties in c₃ as a function of the number N of observed events, represented at the 90% confidence level, is shown in Fig. 3. The N events are analyzed using a Bayesian hierarchical approach for N = 1, 3, 5, 10, 15, and 20, within the Fisher matrix approximation. For N = 20, the analysis is repeated using parameter estimation on zero-noise injections. The zero-noise realization of Gaussian noise is, in the statistical sense, the most likely realization, and, furthermore, the effect of any non-zero Gaussian noise generally weakens drastically with increasing SNR. The result matches well with the corresponding Fisher matrix result (see Fig. 3). This is consistent with other findings in the literature, see, for example, Vallisneri, which demonstrate that, for a four-dimensional Fisher matrix analysis, an SNR above 20 is typically sufficient for robust parameter estimation. We find that a single event among the 20 loudest ones observed by next-generation GW detectors in a year decreases uncertainties, on average, by only a factor of two compared with uncertainties obtained from present astrophysical data. However, as the number of detections increases, we find that the statistical uncertainties in c₃ decrease approximately as 1/√N, and thus converge remarkably well to the injected value. This shows how inference performed at the level of populations of events can potentially provide high-precision constraints on nuclear interactions within a year, competitive with and complementary to terrestrial laboratory data. These constraints would improve further if events with lower SNR were also considered. In contrast, the constraints could weaken in the same time frame if, for example, the observed merger rate is lower than expected.
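The approximate 1/√N scaling follows from the additivity of Fisher information across independent events: stacking N comparable events sums their information, so the combined 1-sigma width shrinks as 1/√N. A minimal idealized sketch (identical events, arbitrary units):

```python
import numpy as np

# Fisher information adds across independent events, so the combined
# 1-sigma width on a shared parameter scales as 1/sqrt(N) for N
# comparable events (idealized identical-event case).
sigma_single = 1.0  # stand-in per-event uncertainty on c3

def combined_sigma(n_events):
    fisher_total = n_events / sigma_single**2  # summed Fisher information
    return 1.0 / np.sqrt(fisher_total)

widths = {n: combined_sigma(n) for n in (1, 3, 5, 10, 15, 20)}
```

In practice events differ in SNR, so the per-event information varies, but the qualitative 1/√N trend of Fig. 3 is the same.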

Figure 3 also demonstrates the importance of the marginalization over uncertainties in the high-density EOS, implemented in our framework in the 5- and 7-parameter models. We find excellent agreement between the results obtained from these two models. However, the very simple 2-parameter model -- which does not account for such uncertainties -- is in significant tension with the injected value, as it converges toward an incorrect value of c₃. This underscores the importance of allowing for general high-density extensions in order to avoid systematic uncertainties in the inference of LECs.
