3  More Efficient Variants of PWS

The contents of this chapter have been published in Phys. Rev. X 13, 041017 (2023)  [1].

In the previous chapter, we introduced Path Weight Sampling (PWS), a computational approach capable of providing exact information rate estimates for any stochastic system. However, the direct implementation of PWS becomes inefficient for complex systems and long trajectories due to the high dimensionality of trajectory space. To overcome these limitations, we present two improved PWS variants in this chapter, inspired by free-energy estimation techniques from statistical physics. First, Rosenbluth-Rosenbluth PWS (RR-PWS) leverages computational strategies developed for polymer chemical potential calculations, enhancing efficiency for sampling in trajectory spaces. Second, Thermodynamic Integration PWS (TI-PWS) applies thermodynamic integration combined with trajectory space MCMC sampling, inspired by transition path sampling. We benchmark these methods using a simple coupled birth-death model, comparing the effectiveness of all three PWS variants against analytical results and the Gaussian approximation.

3.1 Introduction

To quantify information transmission, be it in a natural or engineered information processing system, we need to be able to compute the mutual information between input and output trajectories, from which the information transmission rate can be obtained. However, because of the high dimensionality of trajectory space, computing the mutual information between trajectories is exceedingly difficult, if not impossible: the conventional non-parametric binning approach to estimating the required trajectory probability distributions cannot be used. Indeed, except for very simple models, the mutual information between trajectories is typically computed using approximations that are often uncontrolled. In the previous chapter, we introduced a computational scheme called Path Weight Sampling (PWS), which, for the first time, makes it possible to compute the information rate exactly, for any stochastic system.

Yet, the scheme presented in that chapter, Direct PWS, becomes inefficient for more complex systems and longer trajectories. The reason is that the number of possible trajectories increases exponentially with trajectory length, leading to a corresponding increase in the variance of the estimate. Hence, for long trajectories the PWS estimate may prove to be computationally infeasible. To address this issue, we describe two improved variants of PWS in this chapter, both based on free-energy estimators from statistical physics.

Specifically, in Section 3.3 we present Rosenbluth-Rosenbluth PWS (RR-PWS) which exploits the observation that the computation of ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}] is analogous to the calculation of the (excess) chemical potential of a polymer, for which efficient methods have been developed  [2โ€“4]. In Section 3.4, we present Thermodynamic Integration PWS (TI-PWS) which is based on the classic free energy estimation technique of thermodynamic integration  [5โ€“7] in conjunction with a trajectory space MCMC sampler using ideas from transition path sampling  [8].

In Section 3.5 we apply PWS to a well-known model system. It consists of a simple pair of coupled birth-death processes, which allows us to test the efficiency of the three PWS variants, as well as to compare the PWS results with analytical results from the Gaussian approximation  [9] and the technique by Duso and Zechner  [10].

3.2 Marginalizing in Trajectory Space

PWS evaluates the mutual information I(๐’ฎ,๐’ณ)I(\mathcal{S},\mathcal{X}) from the marginal entropy H(๐’ณ)H(\mathcal{X}) and the conditional entropy H(๐’ณ|๐’ฎ)H(\mathcal{X}|\mathcal{S}), see Equation 2.2. Of these two entropies, the conditional one can be efficiently computed using the scheme described in the previous chapter, and as used in DPWS. However, obtaining the marginal entropy H(๐’ณ)=โˆ’โˆซ๐’Ÿ[๐’™]๐’ซ[๐’™]ln๐’ซ[๐’™]H(\mathcal{X}) = - \int \mathcal{D}[\mathbfit{x}]\ \mathcal{P}[\mathbfit{x}] \ln\mathcal{P}[\mathbfit{x}] is much more challenging. Indeed, the computationally most expensive part of Direct PWS is the evaluation of the marginalization integral ๐’ซ[๐’™i]=โˆซ๐’Ÿ[๐’”]๐’ซ[๐’”,๐’™i]\mathcal{P}[\mathbfit{x}_i]=\int\mathcal{D}[\mathbfit{s}] \mathcal{P}[\mathbfit{s},\mathbfit{x}_i] which needs to be performed for every sample ๐’™1,โ€ฆ,๐’™N\mathbfit{x}_1,\ldots,\mathbfit{x}_N. Consequently, the computational efficiency of this marginalization is essential for the overall performance.

Marginalization is a general term to denote an operation where one or more variables are integrated out of a joint probability distribution. For instance, we obtain the marginal probability distribution ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}] from ๐’ซ[๐’”,๐’™]\mathcal{P}[\mathbfit{s},\mathbfit{x}] by computing the integral

๐’ซ[๐’™]=โˆซ๐’Ÿ[๐’”]๐’ซ[๐’”,๐’™]=โˆซ๐’Ÿ[๐’”]๐’ซ[๐’”]๐’ซ[๐’™|๐’”].(3.1)\mathcal{P}[\mathbfit{x}] = \int\mathcal{D}[\mathbfit{s}]\ \mathcal{P}[\mathbfit{s},\mathbfit{x}] = \int\mathcal{D}[\mathbfit{s}]\ \mathcal{P}[\mathbfit{s}]\mathcal{P}[\mathbfit{x}|\mathbfit{s}]\,. \label{eq-generic-marginalization} \qquad(3.1)

In DPWS, we use Equation 2.7 to compute ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}], which involves generating independent input trajectories from ๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}]. However, this is not the optimal Monte Carlo technique to perform the marginalization. The generated input trajectories are independent of the output trajectory ๐’™\mathbfit{x}. Thus, we ignore the causal connection between ๐’”\mathbfit{s} and ๐’™\mathbfit{x}, and we typically end up sampling trajectories ๐’”โ‹†\mathbfit{s}^\star whose likelihoods ๐’ซ[๐’™|๐’”โ‹†]\mathcal{P}[\mathbfit{x}|\mathbfit{s}^\star] are very small. Then, most sampled trajectories have small integral weights, and only very few samples provide a significant contribution to the average. The variance of the result is then very large because the effective sample size is much smaller than the total sample size. The use of ๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}] as the sampling distribution is thus only practical in cases where the dependence of the output on the input is not too strong. It follows that this sampling scheme works best when the mutual information is not too large 1.
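
This degeneracy can be made concrete with a small numerical experiment. In the sketch below, scalar random variables stand in for trajectories (a simplification made purely for illustration, not the chapter's model): as a sharply peaked "likelihood" is shifted further into the tail of the sampling distribution, the effective sample size of the weighted ensemble collapses even though the total sample size stays fixed.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 10_000
s = rng.standard_normal(M)                   # samples from the prior P[s]

ess = {}
for shift in (0.0, 2.0, 4.0):                # mismatch between prior and likelihood
    w = np.exp(-0.5 * (s - shift)**2 / 0.1)  # sharply peaked "likelihood" weights
    ess[shift] = w.sum()**2 / (w**2).sum()   # effective sample size
    print(f"shift={shift}: ESS = {ess[shift]:.1f} of {M}")
```

The larger the mismatch, the fewer samples carry appreciable weight, and the variance of the weighted average grows accordingly.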

This is a well-known Monte Carlo sampling problem, and a large number of techniques have been developed to solve it. The two variants of our scheme, RR-PWS and TI-PWS, both make use of ideas from statistical physics for the efficient computation of free energies.

Table 3.1: Translation to the notation of statistical physics. The definitions of ๐’ฐ\mathcal{U} and ๐’ฐ0\mathcal{U}_0 used here are given in Equation 3.2 and Equation 3.6, respectively.
๐’ซ[๐’”,๐’™]\mathcal{P}[\mathbfit{s},\mathbfit{x}] eโˆ’๐’ฐ[๐’”,๐’™]e^{-\mathcal{U}[\mathbfit{s}, \mathbfit{x}]}
๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}] 1๐’ต0[๐’™]eโˆ’๐’ฐ0[๐’”]\frac{1}{\mathcal{Z}_0[\mathbfit{x}]} e^{-\mathcal{U}_0[\mathbfit{s}]}
๐’ซ[๐’”|๐’™]\mathcal{P}[\mathbfit{s}|\mathbfit{x}] 1๐’ต[๐’™]eโˆ’๐’ฐ[๐’”,๐’™]\frac{1}{\mathcal{Z}[\mathbfit{x}]} e^{-\mathcal{U}[\mathbfit{s}, \mathbfit{x}]}
11 ๐’ต0[๐’™]\mathcal{Z}_0[\mathbfit{x}]
๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}] ๐’ต[๐’™]\mathcal{Z}[\mathbfit{x}]
๐’ซ[๐’™|๐’”]\mathcal{P}[\mathbfit{x}|\mathbfit{s}] eโˆ’ฮ”๐’ฐ[๐’”,๐’™]e^{-\Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}]}

To understand how these ideas can be applied to compute the marginal probability ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}], it is helpful to rephrase the marginalization integral in Equation 3.1 in the language of statistical physics. In this language, ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}] corresponds to the normalization constant, or partition function, of the Boltzmann distribution for the potential2

๐’ฐ[๐’”,๐’™]=โˆ’ln๐’ซ[๐’”,๐’™].(3.2)\label{eq-h1} \mathcal{U}[\mathbfit{s},\mathbfit{x}] = -\ln\mathcal{P}[\mathbfit{s},\mathbfit{x}] \,. \qquad(3.2)

In Equation 3.2, ๐’”\mathbfit{s} is interpreted as a variable in the configuration space, while ๐’™\mathbfit{x} acts as an auxiliary variable, i.e., a parameter. Note that both ๐’”\mathbfit{s} and ๐’™\mathbfit{x} still represent trajectories. For this potential, the partition function is given by

๐’ต[๐’™]=โˆซ๐’Ÿ[๐’”]eโˆ’๐’ฐ[๐’”,๐’™].(3.3)\mathcal{Z}[\mathbfit{x}] = \int\mathcal{D}[\mathbfit{s}]\; e^{-\mathcal{U}[\mathbfit{s},\mathbfit{x}]} \,. \label{eq-partition-function} \qquad(3.3)

The integral only runs over the configuration space, i.e. we integrate only with respect to ๐’”\mathbfit{s}. By inserting the expression for ๐’ฐ[๐’”,๐’™]\mathcal{U}[\mathbfit{s},\mathbfit{x}], we see that the partition function is exactly equal to the marginal probability of the output, i.e. ๐’ต[๐’™]=๐’ซ[๐’™]\mathcal{Z}[\mathbfit{x}] = \mathcal{P}[\mathbfit{x}]. The free energy is given by

โ„ฑ[๐’™]=โˆ’ln๐’ต[๐’™]=โˆ’ln๐’ซ[๐’™].(3.4)\mathcal{F}[\mathbfit{x}] = -\ln \mathcal{Z}[\mathbfit{x}] = -\ln \mathcal{P}[\mathbfit{x}]\,. \label{eq-free-energy} \qquad(3.4)

In statistical physics it is well known that the free energy cannot be directly measured from a simulation. Instead, one estimates the free-energy difference

ฮ”โ„ฑ[๐’™]=โ„ฑ[๐’™]โˆ’โ„ฑ0[๐’™]=โˆ’ln๐’ต[๐’™]๐’ต0[๐’™](3.5)\Delta\mathcal{F}[\mathbfit{x}] = \mathcal{F}[\mathbfit{x}] - \mathcal{F}_0[\mathbfit{x}] = -\ln \frac{\mathcal{Z}[\mathbfit{x}]}{\mathcal{Z}_0[\mathbfit{x}]} \label{eq-free-energy-difference} \qquad(3.5)

between the system and a reference system with known free energy โ„ฑ0[๐’™]\mathcal{F}_0[\mathbfit{x}]. The reference system can be freely chosen and is usually defined using a Boltzmann distribution for a convenient reference potential ๐’ฐ0[๐’”,๐’™]\mathcal{U}_0[\mathbfit{s}, \mathbfit{x}]. In our case, a natural choice of reference potential is

๐’ฐ0[๐’”,๐’™]=โˆ’ln๐’ซ[๐’”](3.6)\label{eq-h0} \mathcal{U}_0[\mathbfit{s},\mathbfit{x}]=-\ln\mathcal{P}[\mathbfit{s}] \qquad(3.6)

with the corresponding partition function being simply

๐’ต0[๐’™]=โˆซ๐’Ÿ[๐’”]๐’ซ[๐’”]=1.(3.7)\mathcal{Z}_0[\mathbfit{x}]=\int\mathcal{D}[\mathbfit{s}]\ \mathcal{P}[\mathbfit{s}]=1\,. \qquad(3.7)

The reference free energy therefore is zero (โ„ฑ0[๐’™]=โˆ’ln๐’ต0[๐’™]=0\mathcal{F}_0[\mathbfit{x}]=-\ln\mathcal{Z}_0[\mathbfit{x}]=0). Hence, the free-energy difference is

ฮ”โ„ฑ[๐’™]=โ„ฑ[๐’™]=โˆ’ln๐’ซ[๐’™].(3.8)\Delta\mathcal{F}[\mathbfit{x}]= \mathcal{F}[\mathbfit{x}] = -\ln\mathcal{P}[\mathbfit{x}]\,. \label{eq-free-energy-difference-equals-lnp} \qquad(3.8)

Note that in our case the reference potential ๐’ฐ0[๐’”,๐’™]=โˆ’ln๐’ซ[๐’”]\mathcal{U}_0[\mathbfit{s},\mathbfit{x}]=-\ln\mathcal{P}[\mathbfit{s}] does not depend on the output trajectory ๐’™\mathbfit{x}, i.e. ๐’ฐ0[๐’”,๐’™]โ‰ก๐’ฐ0[๐’”]\mathcal{U}_0[\mathbfit{s},\mathbfit{x}]\equiv\mathcal{U}_0[\mathbfit{s}]. It describes a non-interacting version of our input-output system where the input trajectories evolve independently of the fixed output trajectory ๐’™\mathbfit{x}.

What is the interaction between the output ๐’™\mathbfit{x} and the input trajectory ensemble? We define the interaction potential ฮ”๐’ฐ[๐’”,๐’™]\Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}] through

๐’ฐ[๐’”,๐’™]=๐’ฐ0[๐’”]+ฮ”๐’ฐ[๐’”,๐’™].(3.9)\mathcal{U}[\mathbfit{s}, \mathbfit{x}] = \mathcal{U}_0[\mathbfit{s}] + \Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}] \,. \label{eq-interaction-potential} \qquad(3.9)

The interaction potential makes it apparent that the distribution of ๐’”\mathbfit{s} corresponding to the potential ๐’ฐ[๐’”,๐’™]\mathcal{U}[\mathbfit{s}, \mathbfit{x}] is biased by ๐’™\mathbfit{x} with respect to the distribution corresponding to the reference potential ๐’ฐ0[๐’”]\mathcal{U}_0[\mathbfit{s}]. By inserting the expressions for ๐’ฐ0[๐’”]\mathcal{U}_0[\mathbfit{s}] and ๐’ฐ[๐’”,๐’™]\mathcal{U}[\mathbfit{s}, \mathbfit{x}] into Equation 3.9 we see that

ฮ”๐’ฐ[๐’”,๐’™]=โˆ’ln๐’ซ[๐’™|๐’”]=โˆ’lnP(x0|s0)โˆ’โˆซ0Tdtโ„’t[๐’”,๐’™](3.10)\begin{aligned} \Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}] &= -\ln\mathcal{P}[\mathbfit{x}|\mathbfit{s}] \\ &= -\ln\mathrm{P}(x_0|s_0)-\int^T_0 dt\ \mathcal{L}_t[\mathbfit{s}, \mathbfit{x}] \end{aligned} \label{eq-boltzmann-weight} \qquad(3.10)

where โ„’t[๐’”,๐’™]\mathcal{L}_t[\mathbfit{s}, \mathbfit{x}] is given by Equation 2.15 and can be directly computed from the master equation. This expression illustrates that the interaction of the output trajectory ๐’™\mathbfit{x} with the ensemble of input trajectories is characterized by the trajectory likelihood ๐’ซ[๐’™|๐’”]\mathcal{P}[\mathbfit{x}|\mathbfit{s}].
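
To illustrate how an interaction potential of the form of Equation 3.10 can be evaluated in practice, the sketch below computes ฮ”๐’ฐ = โˆ’ln ๐’ซ[๐’™|๐’”] for a hypothetical birth-death output, with birth rate k s(t) and decay rate g x. The model, the parameter names, and the quadrature of the survival term are illustrative assumptions (not the chapter's benchmark system), and the boundary term โˆ’ln P(x0|s0) is omitted for brevity.

```python
import numpy as np

def delta_U(x_times, x_states, s_of_t, k, g, T):
    """-ln P[x|s] for a birth-death output: x -> x+1 with rate k*s(t),
    x -> x-1 with rate g*x.  s_of_t must accept arrays of times.
    (The boundary term -ln P(x0|s0) is omitted in this sketch.)"""
    logp = 0.0
    # log-propensity of each observed jump of x at its jump time
    for t, x_prev, x_new in zip(x_times, x_states[:-1], x_states[1:]):
        rate = k * s_of_t(t) if x_new > x_prev else g * x_prev
        logp += np.log(rate)
    # survival term: integrate the total exit rate over [0, T] by quadrature
    grid = np.linspace(0.0, T, 2001)
    x_grid = x_states[np.searchsorted(x_times, grid, side="right")]
    total_rate = k * s_of_t(grid) + g * x_grid
    logp -= np.sum(0.5 * (total_rate[1:] + total_rate[:-1]) * np.diff(grid))
    return -logp

# sanity check: constant input s(t)=2, output constant at x0=3, k=g=1, T=1,
# so P[x|s] = exp(-(k*s0 + g*x0) T) and delta_U = 5
val = delta_U(np.array([]), np.array([3.0]),
              lambda t: 2.0 * np.ones_like(t), 1.0, 1.0, 1.0)
print(val)
```

For a constant input and an output that never jumps, the integral reduces to (k s0 + g x0) T, which the quadrature reproduces exactly.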

To summarize, in this section we have introduced notation (see Table 3.1) showing that computing a marginalization integral is equivalent to the computation of a free-energy difference. This picture allows us to distinguish between two ensembles for ๐’”\mathbfit{s}, the non-interacting ensemble distributed according to exp(โˆ’๐’ฐ0[๐’”])=๐’ซ[๐’”]\exp(-\mathcal{U}_0[\mathbfit{s}])=\mathcal{P}[\mathbfit{s}], and the interacting ensemble distributed according to exp(โˆ’๐’ฐ[๐’”,๐’™])โˆ๐’ซ[๐’”|๐’™]\exp(-\mathcal{U}[\mathbfit{s},\mathbfit{x}])\propto\mathcal{P}[\mathbfit{s}|\mathbfit{x}]. With these notions we can rewrite the brute force estimate in Direct PWS (Chapter 2) as

๐’ซ[๐’™]=๐’ต[๐’™]๐’ต0[๐’™]=โŸจeโˆ’ฮ”๐’ฐ[๐’”,๐’™]โŸฉ0(3.11)\begin{aligned} \mathcal{P}[\mathbfit{x}] = \frac{\mathcal{Z}[\mathbfit{x}]}{\mathcal{Z}_0[\mathbfit{x}]} &= \langle e^{-\Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}]} \rangle_0 \label{eq-boltzmann-average} \end{aligned} \qquad(3.11)

where the notation โŸจโ‹ฏโŸฉ0\langle\cdots\rangle_0 refers to an average with respect to the non-interacting ensemble. By inserting the expressions for ๐’ฐ0\mathcal{U}_0 and ฮ”๐’ฐ\Delta\mathcal{U}, it is easy to verify that this estimate is equivalent to Equation 2.7.
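
As a minimal consistency check of this brute-force estimate, the following toy computation replaces trajectories by scalar random variables (an assumption made purely for illustration), so that the marginal is known in closed form: with s ~ N(0,1) playing the role of ๐’ซ[๐’”] and exp(โˆ’ฮ”๐’ฐ) = P(x|s) = N(x; s, ฯƒยฒ) playing the role of the likelihood, the exact marginal is P(x) = N(x; 0, 1+ฯƒยฒ).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, x, M = 0.5, 1.2, 200_000

s = rng.standard_normal(M)                # the non-interacting ensemble P[s]
boltzmann = np.exp(-0.5 * (x - s)**2 / sigma**2) / np.sqrt(2*np.pi*sigma**2)
p_est = boltzmann.mean()                  # <exp(-dU)>_0

# exact marginal for this toy model: x ~ N(0, 1 + sigma^2)
var = 1.0 + sigma**2
p_exact = np.exp(-0.5 * x**2 / var) / np.sqrt(2*np.pi*var)
print(p_est, p_exact)
```

The Monte Carlo average converges to the exact marginal; with a stronger input-output coupling (smaller ฯƒ), the same average converges far more slowly, which is precisely the degeneracy problem described above.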

The more advanced variants of PWS introduced below leverage the analogy with statistical physics to improve the efficiency of marginalization. RR-PWS draws ideas from soft condensed matter simulations by recognizing that Equation 3.5 has the same form as the excess chemical potential of a polymer for which efficient computation techniques have been developed  [2,11]. Meanwhile, TI-PWS is inspired by Transition Path Sampling (TPS) for sampling rare trajectories  [8] and uses thermodynamic integration for free-energy estimation.

3.3 RR-PWS

Figure 3.1: Illustration of one step of the bootstrap particle filter in RR-PWS. We start with a set of trajectories ๐’”[0,iโˆ’1]k\mathbfit{s}^k_{[0,i-1]} with time span [ฯ„0,ฯ„iโˆ’1][\tau_0,\tau_{i-1}] (left panel). In the next step we propagate these trajectories forward in time to ฯ„i\tau_i, according to ๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}] (central panel). Then we resample the trajectories according to the Boltzmann weights of their most recent segments, effectively eliminating or duplicating individual segments. An example outcome of the resampling step is shown in the right panel where the bottom trajectory was duplicated and one of the top trajectories was eliminated. These steps are repeated for each segment, until a set of input trajectories of the desired length is generated. The intermediate resampling steps bias the trajectory distribution from ๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}] towards ๐’ซ[๐’”|๐’™]\mathcal{P}[\mathbfit{s}|\mathbfit{x}].

In Rosenbluth-Rosenbluth PWS we compute the free-energy difference ฮ”โ„ฑ\Delta\mathcal{F} between the non-interacting reference system ๐’ฐ0\mathcal{U}_0 and the interacting system ๐’ฐ\mathcal{U} in a single simulation, just as in the brute-force method. However, instead of generating ๐’”\mathbfit{s} trajectories in an uncorrelated fashion according to exp(โˆ’๐’ฐ0[๐’”])=๐’ซ[๐’”]\exp(-\mathcal{U}_0[\mathbfit{s}])=\mathcal{P}[\mathbfit{s}], we bias our sampling distribution towards exp(โˆ’๐’ฐ[๐’”,๐’™])โˆ๐’ซ[๐’”|๐’™]\exp(-\mathcal{U}[\mathbfit{s}, \mathbfit{x}])\propto\mathcal{P}[\mathbfit{s}|\mathbfit{x}] to reduce the sampling problems found in DPWS.

The classical scheme for biasing the sampling distribution in polymer physics is due to Rosenbluth and Rosenbluth  [12] in their study of self-avoiding chains. A substantial improvement of the Rosenbluth algorithm was achieved by Grassberger, by generating polymers using pruning and enrichment steps, thereby eliminating configurations that do not significantly contribute to the average. This scheme is known as the pruned-enriched Rosenbluth method, or PERM  [3]. While PERM is much more powerful than the standard Rosenbluth algorithm, its main drawback is that it requires careful tuning of the pruning and enrichment schedule to achieve optimal convergence. Therefore we have opted to use a technique that is similar in spirit to PERM but requires less tuning, the bootstrap particle filter  [13]. We will describe how to use PWS with a particle filter below. That said, we want to stress that the particle filter can easily be replaced by PERM or other related methods  [14]. Also schemes inspired by variants of Forward Flux Sampling  [15,16] could be developed.

3.3.1 Bootstrap Particle Filter

In the methods discussed above, a polymer is grown monomer by monomer. In a continuous-time Markov process this translates to trajectories being grown segment by segment. To define the segments, we choose a time discretization 0<ฯ„1<ฯ„2<โ‹ฏ<ฯ„nโˆ’1<T0<\tau_1<\tau_2<\cdots<\tau_{n-1}<T. Thus, each trajectory ๐’”\mathbfit{s} of duration TT consists of nn segments where we denote the segment between ฯ„i\tau_i and ฯ„j\tau_j by ๐’”[i,j]\mathbfit{s}_{[i,j]} (we define ฯ„0=0\tau_0=0 and ฯ„n=T\tau_n=T). The particle filter uses the following procedure to grow an ensemble of trajectories segment by segment:

  1. Generate MM starting points s01,โ€ฆ,s0Ms^1_0, \ldots, s^M_0 according to the initial condition of the input signal P(s0)\mathrm{P}(s_0).

  2. Iterate for i=1,โ€ฆ,ni=1,\ldots,n:

    1. Starting with an ensemble of MM partial trajectories of duration ฯ„iโˆ’1\tau_{i-1} (if i=1i=1 an ensemble of starting points) which we label ๐’”[0,iโˆ’1]k\mathbfit{s}^k_{[0,i-1]} for k=1,โ€ฆ,Mk=1,\ldots,M: (๐’”[0,iโˆ’1]1,โ€ฆ,๐’”[0,iโˆ’1]M),\left(\mathbfit{s}^1_{[0,i-1]}, \ldots, \mathbfit{s}^M_{[0,i-1]}\right)\,, propagate each trajectory (or each starting point) forward in time from ฯ„iโˆ’1\tau_{i-1} to ฯ„i\tau_{i}. Propagation is performed according to the natural dynamics of ๐’”\mathbfit{s}, i.e. generating a new segment ๐’”[iโˆ’1,i]k\mathbfit{s}^k_{[i-1,i]} with probability pigen(k)=๐’ซ[๐’”[iโˆ’1,i]k|๐’”[0,iโˆ’1]k]=eโˆ’๐’ฐ0[๐’”[iโˆ’1,i]k]p^\text{gen}_i(k) = \mathcal{P}\left[\mathbfit{s}^k_{[i-1,i]}|\mathbfit{s}^k_{[0,i-1]}\right] = e^{-\mathcal{U}_0\left[\mathbfit{s}^k_{[i-1,i]}\right]} for k=1,โ€ฆ,Mk=1,\ldots,M.

    2. Compute the Boltzmann weight Uik=ฮ”๐’ฐ[๐’”[iโˆ’1,i]k,๐’™[iโˆ’1,i]]U^k_i = \Delta\mathcal{U}[\mathbfit{s}^k_{[i-1,i]},\mathbfit{x}_{[i-1,i]}] of each new segment. This Boltzmann weight of a segment from ฯ„iโˆ’1\tau_{i-1} to ฯ„i\tau_i can be expressed as Uik=โˆ’ฮด1ilnP(x0|s0)โˆ’โˆซฯ„iโˆ’1ฯ„idtโ„’t[๐’”[iโˆ’1,i]k,๐’™[iโˆ’1,i]],U^k_i = -\delta_{1i} \ln\mathrm{P}(x_0|s_0) -\int^{\tau_i}_{\tau_{i-1}} dt\ \mathcal{L}_t[\mathbfit{s}^k_{[i-1,i]}, \mathbfit{x}_{[i-1,i]}]\,, \label{eq-weight_segment} see Equation 3.10, and is therefore straightforward to compute from the master equation.

    3. Sample MM times from the distribution piselect(k)=eโˆ’Uikwip^\text{select}_i(k) = \frac{e^{-U^k_i}}{w_i} \label{eq-index-prob} where the Rosenbluth weight wiw_i is defined as wi=โˆ‘k=1Meโˆ’Uik.w_i = \sum^M_{k=1} e^{-U^k_i}\,. This sampling procedure yields MM randomly drawn indices โ„“i1,โ€ฆ,โ„“iM\ell^1_i, \ldots, \ell^M_i. Each โ„“ik\ell^k_i is an index that lies in the range from 1,โ€ฆ,M1,\ldots,M and that points to one of the trajectories that have been generated up to ฯ„i\tau_i. To continue the sampling procedure, we relabel the indices such that the resampled set of trajectories is defined by ๐’”ฬƒ[0,i]kโ†๐’”[0,i]โ„“ik\tilde{\mathbfit{s}}^k_{[0,i]} \gets \mathbfit{s}^{\ell^k_i}_{[0,i]} for k=1,โ€ฆ,Mk=1,\ldots,M. The list (๐’”ฬƒ[0,i]1,โ€ฆ,๐’”ฬƒ[0,i]M)\left( \tilde{\mathbfit{s}}^1_{[0,i]}, \ldots, \tilde{\mathbfit{s}}^M_{[0,i]} \right) is subsequently used as the input for the next iteration of the algorithm.

The normalized Rosenbluth factor of the final ensemble is then given by

๐’ฒ=โˆi=1nwiM.(3.12)\mathcal{W} = \prod^n_{i=1} \frac{w_i}{M} \,. \label{eq-normalized-rosenbluth-factor} \qquad(3.12)

As shown in Section 3.3.3, we can derive an unbiased estimate for the desired ratio ๐’ต[๐’™]/๐’ต0[๐’™]=๐’ซ[๐’™]\mathcal{Z}[\mathbfit{x}]/\mathcal{Z}_0[\mathbfit{x}] = \mathcal{P}[\mathbfit{x}] based on the Rosenbluth factor:

๐’ซฬ‚[๐’™]=P(x0)๐’ฒ(3.13)\hat{\mathcal{P}}[\mathbfit{x}] = \mathrm{P}(x_0)\ \mathcal{W} \label{eq-smc-marginal} \qquad(3.13)

with P(x0)\mathrm{P}(x_0) being the probability of the initial output x0x_0. The particle filter can therefore be integrated into the DPWS algorithm to compute the marginal density ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}], substituting the brute-force estimate given in Equation 2.7. We call the resulting algorithm to compute the mutual information RR-PWS.
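
The complete procedure, including the Rosenbluth-factor estimate, can be sketched in a few lines. The code below uses a discrete-time linear-Gaussian model in place of continuous-time master-equation trajectories (an illustrative assumption, not the chapter's benchmark system); this choice has the advantage that the exact log-marginal ln ๐’ซ[๐’™] is available from the Kalman filter for comparison. Since this toy model has no separate initial output distribution, the estimate reduces to the product of mean segment weights.

```python
import numpy as np

rng = np.random.default_rng(1)
q, r, n, M = 0.1, 0.2, 20, 5_000   # step noise, obs noise, segments, particles

# hidden input: s_i = s_{i-1} + N(0, q); output: x_i = s_i + N(0, r)
s_true = np.cumsum(np.sqrt(q) * rng.standard_normal(n))
x = s_true + np.sqrt(r) * rng.standard_normal(n)

def gauss(z, var):
    return np.exp(-0.5 * z**2 / var) / np.sqrt(2 * np.pi * var)

# bootstrap particle filter: propagate, weight, resample; accumulate ln W
s = np.zeros(M)
log_W = 0.0
for i in range(n):
    s = s + np.sqrt(q) * rng.standard_normal(M)    # step 2(a): propagate with P[s]
    w = gauss(x[i] - s, r)                         # step 2(b): segment weights
    log_W += np.log(w.mean())                      # accumulate ln(w_i / M)
    idx = rng.choice(M, size=M, p=w / w.sum())     # step 2(c): resample indices
    s = s[idx]

# exact ln P[x] from the Kalman filter prediction decomposition
m, P, log_exact = 0.0, 0.0, 0.0
for i in range(n):
    P += q                                         # predict
    log_exact += np.log(gauss(x[i] - m, P + r))    # ln P(x_i | x_0, ..., x_{i-1})
    K = P / (P + r)                                # measurement update
    m += K * (x[i] - m)
    P *= 1.0 - K
print(log_W, log_exact)
```

The accumulated log-Rosenbluth factor closely tracks the exact Kalman log-marginal, illustrating why the particle filter can replace the brute-force estimate inside the PWS outer loop.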

3.3.2 Intuitive Justification of the Algorithm

First note that steps 1 and 2(a) of the procedure above involve just propagating MM trajectories in parallel, according to ๐’ซ[๐’”]=exp(โˆ’๐’ฐ0[๐’”])\mathcal{P}[\mathbfit{s}]=\exp(-\mathcal{U}_0[\mathbfit{s}]). The interesting steps are 2(b-c), where we eliminate or duplicate some of the trajectories according to the Boltzmann weights of the most recent segment. Note that in general the list of indices (โ„“i1,โ€ฆ,โ„“iM)(\ell^1_i,\ldots,\ell^M_i) that are sampled in step 2(c) will contain duplicates (โ„“ik=โ„“ikโ€ฒ\ell^k_i=\ell^{k^\prime}_i for kโ‰ kโ€ฒk\neq k^\prime), thus cloning the corresponding trajectory. Concomitantly, the indices โ„“i1,โ€ฆ,โ„“iM\ell^1_i, \ldots, \ell^M_i may not include every original index 1,โ€ฆ,M1,\ldots,M, therefore eliminating some trajectories. Since indices of trajectories with high Boltzmann weight are more likely to be drawn in the resampling step 2(c), this scheme biases the sampling distribution towards trajectories with large Boltzmann weight, ensuring that we only spend computational effort on propagating trajectories which contribute significantly to the marginalization integral.

Hence, at its heart, the particle filter is an importance sampling scheme. It produces samples that are biased towards the ideal importance sampling distribution exp(โˆ’๐’ฐ0[๐’”])exp(โˆ’ฮ”๐’ฐ[๐’”,๐’™])\exp(-\mathcal{U}_0[\mathbfit{s}])\exp(-\Delta\mathcal{U}[\mathbfit{s},\mathbfit{x}]), i.e., towards the Boltzmann distribution of the interacting ensemble. The Rosenbluth factor ๐’ฒ\mathcal{W} represents the importance sampling weight which would be required to correct for the sampling bias when computing averages using the sampled trajectories. Importantly for our case, the Rosenbluth factor can also be used to estimate the marginal probability ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}]. To illustrate the algorithm, one iteration of the particle filter is presented schematically in Figure 3.1.

3.3.3 Detailed Justification

This subsection justifies the marginal probability estimate shown in Equation 3.13 in greater detail, and may be skipped on first reading. We show that the bootstrap particle filter provides a consistent estimator for the marginal probability ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}], or, equivalently, the ratio of partition functions ๐’ต[๐’™]/๐’ต0[๐’™]\mathcal{Z}[\mathbfit{x}]/\mathcal{Z}_0[\mathbfit{x}]. The result that this estimate is also unbiased is more difficult to establish; a proof is given by Del Moral  [17].

We structure our justification of the particle filter into three steps. We first give a brief description of how a resampling procedure can generally be used to generate samples approximating a target distribution when only samples from a different distribution are available. Secondly, we use these insights to explain how the resampling procedure used in the particle filter generates trajectories whose distribution is biased towards ๐’ซ[๐’”|๐’™]\mathcal{P}[\mathbfit{s}|\mathbfit{x}], even though we only generate trajectories according to ๐’ซ[๐’”]\mathcal{P}[\mathbfit{s}]. Finally, we use this result to show that the particle filter provides a consistent estimate for ๐’ซ[๐’™]\mathcal{P}[\mathbfit{x}].

3.3.3.1 Sampling and resampling

Sampling and then resampling is a strategy to use samples ๐’”1,โ€ฆ,๐’”M\mathbfit{s}^1,\ldots,\mathbfit{s}^M from a given prior distribution f[๐’”]f[\mathbfit{s}] to generate approximate 3 samples from a different distribution of interest, with density proportional to the product h[๐’”]=f[๐’”]g[๐’”]h[\mathbfit{s}]=f[\mathbfit{s}]g[\mathbfit{s}]. In general, h[๐’”]h[\mathbfit{s}] is not normalized, and we denote the corresponding normalized probability density by hฬ‚[๐’”]=h[๐’”]/โˆซ๐’Ÿ[๐’”]h[๐’”]\hat{h}[\mathbfit{s}]=h[\mathbfit{s}]/\int\mathcal{D}[\mathbfit{s}]h[\mathbfit{s}]. To generate samples from hฬ‚[๐’”]\hat{h}[\mathbfit{s}], we assign each of the existing samples from f[๐’”]f[\mathbfit{s}] a normalized weight

Wk=g[๐’”k]โˆ‘j=1Mg[๐’”j].(3.14)W^k = \frac{g[\mathbfit{s}^k]}{\sum^M_{j=1}g[\mathbfit{s}^j]}\,. \label{eq-weights-appendix} \qquad(3.14)

Then, by sampling from the discrete set {๐’”1,โ€ฆ,๐’”M}\{\mathbfit{s}^1,\ldots,\mathbfit{s}^M\} according to the assigned weights W1,โ€ฆ,WMW^1,\ldots,W^M, we select samples that are approximately distributed according to hฬ‚[๐’”]\hat{h}[\mathbfit{s}]. Indeed, for Mโ†’โˆžM\rightarrow\infty the distribution of the resulting samples approaches the density hฬ‚[๐’”]\hat{h}[\mathbfit{s}]  [18]. We use resampling at each iteration of the algorithm of Section 3.3 to regularly prune those trajectories with low overall contribution to the marginalization integral.
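
The sample-then-resample strategy can be sketched in a few lines. In this toy example (our choice, not from the text), the prior is f = N(0,1) and the weighting function is g(s) = exp(โˆ’(sโˆ’2)ยฒ/2), so the normalized target hฬ‚ โˆ f ยท g is the Gaussian N(1, 1/2).

```python
import numpy as np

rng = np.random.default_rng(2)
M = 100_000
s = rng.standard_normal(M)              # samples from the prior f = N(0,1)
g = np.exp(-0.5 * (s - 2.0)**2)         # unnormalized weighting function
W = g / g.sum()                         # normalized weights (Equation 3.14)
resampled = rng.choice(s, size=M, p=W)  # resampling step
print(resampled.mean(), resampled.var())
```

The resampled population has mean โ‰ˆ 1 and variance โ‰ˆ 1/2, matching the normalized target hฬ‚, even though every sample was originally drawn from f.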

3.3.3.2 Particle filter

In the bootstrap particle filter, each iteration starts with a set of trajectories ๐’”[0,iโˆ’1]1,โ€ฆ,๐’”[0,iโˆ’1]M\mathbfit{s}^1_{[0,i-1]},\ldots,\mathbfit{s}^M_{[0,i-1]} and aims to produce a set of elongated trajectories (from time step (iโˆ’1)โ†’i(i-1) \to i) whose distribution tends towards ๐’ซ[๐’”[0,i]|๐’™[0,i]]\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i]}]. By iterating this procedure, we can generate a set of trajectories distributed approximately according to ๐’ซ[๐’”[0,n]|๐’™[0,n]]\mathcal{P}[\mathbfit{s}_{[0,n]}|\mathbfit{x}_{[0,n]}] for any n>1n>1. Thus, the particle filter is a biased sampling scheme which provides an approximation of ๐’ซ[๐’”[0,n]|๐’™[0,n]]\mathcal{P}[\mathbfit{s}_{[0,n]}|\mathbfit{x}_{[0,n]}]. Moreover, using the particle filter we can also compute the corresponding importance weights, which can be used to compute the marginal probability ๐’ซ[๐’™[0,n]]\mathcal{P}[\mathbfit{x}_{[0,n]}] for n=1,2,โ€ฆn=1,2,\ldots.

We now take a closer look at one iteration of the particle filter. Start with a set of trajectories in [ฯ„0,ฯ„iโˆ’1][\tau_0,\tau_{i-1}], denoted by {๐’”[0,iโˆ’1]1,โ€ฆ,๐’”[0,iโˆ’1]M}\left\{\mathbfit{s}^1_{[0,i-1]},\ldots,\mathbfit{s}^M_{[0,i-1]}\right\}. These trajectories are then propagated forward to time ฯ„i\tau_i, by adding a new segment ๐’”[iโˆ’1,i]k\mathbfit{s}^k_{[i-1,i]} to the trajectory ๐’”[0,iโˆ’1]k\mathbfit{s}^k_{[0,i-1]} for k=1,โ€ฆ,Mk=1,\ldots,M. Each new segment is generated from the distribution ๐’ซ[๐’”[iโˆ’1,i]k|๐’”[0,iโˆ’1]k]\mathcal{P}[\mathbfit{s}^k_{[i-1,i]}|\mathbfit{s}^k_{[0,i-1]}] such that the propagation step results in a set of trajectories {๐’”[0,i]1,โ€ฆ,๐’”[0,i]M}\{\mathbfit{s}^1_{[0,i]},\ldots,\mathbfit{s}^M_{[0,i]}\}, distributed according to f[๐’”[0,i]]=๐’ซ[๐’”[0,i]|๐’™[0,iโˆ’1]]f[\mathbfit{s}_{[0,i]}]=\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}].

Next, we resample from the set of trajectories, with the goal of producing a set of trajectories distributed according to the target density hฬ‚[๐’”]=๐’ซ[๐’”[0,i]|๐’™[0,i]]\hat{h}[\mathbfit{s}] = \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i]}]. Thus, we have to find the appropriate weighting function g[๐’”[0,i]]g[\mathbfit{s}_{[0,i]}] in order to approximately produce samples according to the target distribution. By choosing g[๐’”[0,i]]=exp{โˆ’ฮ”๐’ฐ[๐’”[iโˆ’1,i],๐’™[iโˆ’1,i]]}=๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1],๐’”[0,i]]g[\mathbfit{s}_{[0,i]}] = \exp\left\{ -\Delta\mathcal{U}[\mathbfit{s}_{[i-1,i]}, \mathbfit{x}_{[i-1,i]}] \right\} = \mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}_{[0,i]}], we generate normalized weights

Wik=๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1],๐’”[0,i]k]โˆ‘j=1M๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1],๐’”[0,i]j],(3.15)W^k_i = \frac{\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}^k_{[0,i]}]}{\sum^M_{j=1} \mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}^j_{[0,i]}]}\,, \label{eq-normalized-weights} \qquad(3.15)

cf. Equation 3.14. Note that this is the same choice of weighting function as the selection probability in step 2(c) of the algorithm in Section 3.3.1. By comparison with the notation used there, we see that the Boltzmann factors UikU^k_i and Rosenbluth weights wiw_i were defined such that we can express the normalized weight equivalently as

Wik=eโˆ’Uikwi.(3.16)W^k_i = \frac{e^{-U^k_i}}{w_i} \,. \qquad(3.16)

Why is this choice of weighting function the correct one? First, observe that resampling with the normalized weights of Equation 3.15 produces samples approximately distributed according to

h[๐’”[0,i]]=f[๐’”[0,i]]g[๐’”[0,i]]=๐’ซ[๐’”[0,i]|๐’™[0,iโˆ’1]]๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1],๐’”[0,i]].(3.17)\begin{aligned} h[\mathbfit{s}_{[0,i]}] &= f[\mathbfit{s}_{[0,i]}] g[\mathbfit{s}_{[0,i]}] \\ &= \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]\ \mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}_{[0,i]}]\,. \end{aligned} \label{eq-h-unnormalized} \qquad(3.17)

What remains to be shown is that this density h[๐’”[0,i]]h[\mathbfit{s}_{[0,i]}], when normalized, becomes the desired target distribution ๐’ซ[๐’”[0,i]|๐’™[0,i]]\mathcal{P}[\mathbfit{s}_{[0,i]}| \mathbfit{x}_{[0,i]}].

To do so, we need to rewrite the expression for g[๐’”[0,i]]=๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1],๐’”[0,i]]g[\mathbfit{s}_{[0,i]}]=\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}_{[0,i]}] using Bayesโ€™ theorem

g[๐’”[0,i]]=๐’ซ[๐’”[0,i]|๐’™[0,iโˆ’1],๐’™[iโˆ’1,i]]๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1]]๐’ซ[๐’”[0,i]|๐’™[0,iโˆ’1]].(3.18)g[\mathbfit{s}_{[0,i]}]=\frac{ \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{x}_{[i-1,i]}]\ \mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}]}{\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]}\,. \qquad(3.18)

Notice that the first term of the numerator can be written as ๐’ซ[๐’”[0,i]|๐’™[0,i]]\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i]}]. After inserting this result into Equation 3.17, we obtain

h[๐’”[0,i]]=๐’ซ[๐’”[0,i]|๐’™[0,i]]๐’ซ[๐’™[iโˆ’1,i]|๐’™[0,iโˆ’1]].(3.19)h[\mathbfit{s}_{[0,i]}] = \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i]}]\ \mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}]\,. \qquad(3.19)

The second term in this product is a constant, since ๐’™\mathbfit{x} is fixed. The first term is a normalized probability density for ๐’”[0,i]\mathbfit{s}_{[0,i]}. Therefore we find that the normalized density corresponding to h[๐’”[0,i]]h[\mathbfit{s}_{[0,i]}] is

hฬ‚[๐’”[0,i]]=๐’ซ[๐’”[0,i]|๐’™[0,i]].(3.20)\hat{h}[\mathbfit{s}_{[0,i]}] = \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i]}]\,. \qquad(3.20)

Consequently, this is the distribution that is approximated by the set of trajectories at the end of the ii-th iteration of the particle filter, which is what we wanted to show. At its heart, the particle filter is therefore an algorithm to produce samples that are approximately distributed according to ๐’ซ[๐’”|๐’™]\mathcal{P}[\mathbfit{s}|\mathbfit{x}].

3.3.3.3 Marginal probability estimate

We now use these insights to derive an estimate of $\mathcal{P}[\mathbfit{x}]$. We start by noting that the marginal density of the $i$-th output segment, $\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}]$, is given by

$$\begin{aligned} &\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}] \\ &=\int\mathcal{D}[\mathbfit{s}_{[0,i]}]\ \mathcal{P}[\mathbfit{x}_{[i-1,i]},\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]\\ &=\int\mathcal{D}[\mathbfit{s}_{[0,i]}]\ \mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]\ g[\mathbfit{s}_{[0,i]}]\,. \end{aligned} \qquad(3.21)$$

The third line follows from the definition $g[\mathbfit{s}_{[0,i]}]=\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]},\mathbfit{s}_{[0,i]}]$. Hence, we find that the probability $\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}]$ can be expressed as the average

$$\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}] = \left\langle g[\mathbfit{s}_{[0,i]}] \right\rangle_{\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]}\,. \label{eq-marginal-segment-average} \qquad(3.22)$$

In principle, this average can be computed using a Monte Carlo scheme with trajectories generated from $\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]$. Notice that at each iteration of the particle filter, we have at our disposal a set of trajectories $\mathbfit{s}^1_{[0,i]},\ldots,\mathbfit{s}^M_{[0,i]}$ that are approximately distributed according to $\mathcal{P}[\mathbfit{s}_{[0,i]}|\mathbfit{x}_{[0,i-1]}]$. Therefore, we can compute the average in Equation 3.22 directly from the trajectories present at each iteration of the particle filter. With the notation from Section 3.3, using $g[\mathbfit{s}^k_{[0,i]}]=\exp(-U^k_i)$, we thus obtain the estimate

$$\mathcal{P}[\mathbfit{x}_{[i-1,i]}|\mathbfit{x}_{[0,i-1]}] \approx \frac{1}{M}\sum^M_{k=1} e^{-U^k_i} = \frac{w_i}{M}\,. \qquad(3.23)$$

The probability of the entire output trajectory, $\mathcal{P}[\mathbfit{x}]$, is given by the product

$$\mathcal{P}[\mathbfit{x}] = \mathrm{P}(x_0)\,\mathcal{P}[\mathbfit{x}_{[0,1]}|x_0]\cdots \mathcal{P}[\mathbfit{x}_{[n-1,n]}|\mathbfit{x}_{[0,n-1]}]\,, \qquad(3.24)$$

where $\mathrm{P}(x_0)$ is the probability of the initial output state $x_0$, which is assumed to be known. In conclusion, we arrive at the following estimate for the marginal output probability:

$$\hat{\mathcal{P}}[\mathbfit{x}] = \mathrm{P}(x_0) \prod^n_{i=1} \frac{w_i}{M}\,, \qquad(3.25)$$

which is precisely Equation 3.13.
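The published PWS code is written in Julia; purely as an illustration (not the authors' implementation), the following Python sketch applies the same per-step bookkeeping of Equations 3.23 and 3.25 to a small, hypothetical discrete-state model, where the exact marginal is available from the forward algorithm for comparison. All model parameters below are invented for the example.

```python
import numpy as np

# Hypothetical two-state hidden Markov model: hidden state s_i follows a
# Markov chain with transition matrix A; observation x_i is emitted with
# probability E[s_i, x_i]. The particle filter estimates P(x_1..x_n) as the
# product of per-step mean weights w_i / M, mirroring Eqs. 3.23 and 3.25.

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transition matrix
E = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities
p0 = np.array([0.5, 0.5])                # initial hidden-state distribution

def exact_marginal(obs):
    """Forward algorithm: exact P(x_1..x_n)."""
    alpha = p0 * E[:, obs[0]]
    for x in obs[1:]:
        alpha = (alpha @ A) * E[:, x]
    return alpha.sum()

def particle_filter_marginal(obs, M=20000, rng=np.random.default_rng(0)):
    """Bootstrap particle filter estimate of P(x_1..x_n)."""
    s = rng.choice(2, size=M, p=p0)
    logZ = 0.0
    for x in obs:
        w = E[s, x]                  # per-particle weights e^{-U_i^k}
        logZ += np.log(w.mean())     # accumulate log(w_i / M)
        # resample particle indices proportional to the weights
        idx = rng.choice(M, size=M, p=w / w.sum())
        s = s[idx]
        # propagate each surviving particle one step through A
        s = (rng.random(M) < A[s, 1]).astype(int)
    return np.exp(logZ)

obs = [0, 0, 1, 0, 1]
exact = exact_marginal(obs)
estimate = particle_filter_marginal(obs)
print(exact, estimate)
```

With $M=20000$ particles the estimate agrees with the exact forward-algorithm marginal to within a few percent; the same resample-and-propagate loop carries over to trajectory segments in RR-PWS.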

3.3.4 Tuning the Particle Filter

For the efficiency of the particle filter, it is important to carefully choose the number of segments $n$. When segments are very short (i.e., when $n$ is large), the accumulated weights $U^k_i$ tend to differ very little between newly generated segments $\mathbfit{s}^k_{[i-1,i]}$. Hence, the pruning and enrichment of the segments is dominated by noise. In contrast, when the segments are very long, the distribution of the Boltzmann weights $U^k_i$ becomes very wide, and only a small number of segments contribute substantially to the corresponding Rosenbluth weight $w_i$. Hence, to choose $n$ correctly, we need a measure that quantifies the variance of the trajectory weights of the $M$ particles. To this end, we follow Martino et al. [19] and introduce an effective sample size (ESS)

$$M^\text{(eff)}_i = \frac{w_i^2}{\sum^M_{k=1} \left(e^{-U^k_i}\right)^2}, \qquad(3.26)$$

which lies in the range $1 \leq M^\text{(eff)}_i \leq M$: $M^\text{(eff)}_i = 1$ if one trajectory has a much higher weight than all the others, and $M^\text{(eff)}_i = M$ if all trajectories have the same weight. As a rule of thumb, we resample only when $M^\text{(eff)}_i$ drops below $M/2$. Additionally, as recommended in Ref. [20], we use the systematic resampling algorithm to randomly draw the indices in step 2(c), which helps to reduce the variance; we find, however, that the improvement over simple sampling is very minor. Using these techniques, the only parameter that needs to be chosen by hand for the particle filter is the ensemble size $M$.
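As a sketch of this tuning rule (with invented log-weights; the thesis code itself is in Julia), the ESS of Equation 3.26 and conditional systematic resampling can be written as:

```python
import numpy as np

def effective_sample_size(neg_U):
    """ESS of Eq. 3.26, w_i^2 / sum_k (e^{-U_i^k})^2, from log-weights -U_i^k."""
    w = np.exp(neg_U - neg_U.max())   # shift for numerical stability (ESS is shift-invariant)
    return w.sum() ** 2 / np.sum(w ** 2)

def systematic_resample(weights, rng=np.random.default_rng(1)):
    """Systematic resampling: one uniform offset, M evenly spaced pointers."""
    M = len(weights)
    positions = (rng.random() + np.arange(M)) / M
    return np.searchsorted(np.cumsum(weights / weights.sum()), positions)

M = 1000
neg_U = np.random.default_rng(2).normal(size=M)   # example log-weights
ess = effective_sample_size(neg_U)
if ess < M / 2:                                   # resample only when the ESS degrades
    idx = systematic_resample(np.exp(neg_U - neg_U.max()))
```

Uniform weights give an ESS of exactly $M$, while a single dominant weight drives it toward 1, matching the limits quoted above.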

3.4 TI-PWS

Thermodynamic Integration PWS (TI-PWS) is based on the analogy between marginalization integrals and free-energy computations. As before, we view the problem of computing the marginal probability $\mathcal{P}[\mathbfit{x}]$ as equivalent to computing the free-energy difference between the ensembles defined by the potentials $\mathcal{U}_0[\mathbfit{s}, \mathbfit{x}]$ and $\mathcal{U}[\mathbfit{s}, \mathbfit{x}]$, respectively. For TI-PWS, we define a potential $\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]$ with a continuous parameter $\theta\in[0,1]$ that allows us to transform the ensemble from $\mathcal{U}_0$ to $\mathcal{U}=\mathcal{U}_1$. The corresponding partition function is

$$\mathcal{Z}_\theta[\mathbfit{x}]=\int\mathcal{D}[\mathbfit{s}]\ e^{-\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]} \,. \qquad(3.27)$$

For instance, for $0\leq\theta\leq 1$, we can define the potential as

$$\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]=\mathcal{U}_0[\mathbfit{s}, \mathbfit{x}]+\theta\,\Delta\mathcal{U}[\mathbfit{s},\mathbfit{x}]\,, \label{eq-ti-hamiltonian} \qquad(3.28)$$

such that $e^{-\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]}=\mathcal{P}[\mathbfit{s}]\,\mathcal{P}[\mathbfit{x}|\mathbfit{s}]^\theta$. Note that this is the simplest choice of a continuous transformation between $\mathcal{U}_0$ and $\mathcal{U}_1$, but by no means the only one. For reasons of computational efficiency, it can be beneficial to choose a different path between $\mathcal{U}_0$ and $\mathcal{U}_1$, depending on the specific system [6]. Here, however, we do not consider other paths, and derive the thermodynamic integration estimate for the potential given in Equation 3.28.

To derive the thermodynamic integration estimate for the free-energy difference, we first compute the derivative of $\ln\mathcal{Z}_\theta[\mathbfit{x}]$ with respect to $\theta$:

$$\begin{aligned} \frac{\partial}{\partial \theta} \ln\mathcal{Z}_\theta[\mathbfit{x}] &= \frac{1}{\mathcal{Z}_\theta[\mathbfit{x}]} \frac{\partial}{\partial \theta} \int\mathcal{D}[\mathbfit{s}]\ e^{-\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]} \\ &= -\left\langle \frac{\partial \mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]}{\partial\theta} \right\rangle_\theta\\ &= -\left\langle \Delta\mathcal{U}[\mathbfit{s},\mathbfit{x}] \right\rangle_\theta\,. \end{aligned} \label{eq-z-derivative} \qquad(3.29)$$

Thus, the derivative of $\ln\mathcal{Z}_\theta[\mathbfit{x}]$ is an average of the Boltzmann weight with respect to $\mathcal{P}_\theta[\mathbfit{s}|\mathbfit{x}]$, the ensemble distribution of $\mathbfit{s}$ given by

$$\mathcal{P}_\theta[\mathbfit{s}|\mathbfit{x}] = \frac{1}{\mathcal{Z}_\theta[\mathbfit{x}]} e^{-\mathcal{U}_\theta[\mathbfit{s}, \mathbfit{x}]}\,. \qquad(3.30)$$

Integrating Equation 3.29 with respect to $\theta$ leads to the formula for the free-energy difference

$$\Delta\mathcal{F}[\mathbfit{x}] = -\int^1_0 d\theta\ \left\langle \Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}] \right\rangle_\theta\,, \label{eq-ti-estimate} \qquad(3.31)$$

which is the fundamental identity underlying thermodynamic integration.

To compute the free-energy difference using Equation 3.31, we evaluate the integral over $\theta$ numerically using Gaussian quadrature, while the inner average $\left\langle \Delta\mathcal{U}[\mathbfit{s}, \mathbfit{x}] \right\rangle_\theta$ is computed using MCMC simulations. To perform MCMC simulations in trajectory space, we use ideas from transition path sampling (TPS). Specifically, we define an MCMC proposal distribution for trajectories using forward shooting and backward shooting [8]. These proposals regrow either the end or the beginning of a trajectory, respectively. A proposal is accepted according to the Metropolis criterion [21]. Since the efficiency of MCMC samplers strongly depends on the proposal moves that are employed, we expect that better MCMC estimates are possible with more sophisticated proposal distributions.
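To make the quadrature step concrete, the following Python sketch verifies the thermodynamic integration identity of Equation 3.31 on a hypothetical discrete toy problem (invented prior and likelihood); the $\theta$-average is computed by exact enumeration here, whereas TI-PWS computes it by MCMC in trajectory space.

```python
import numpy as np

# Toy marginalization: s takes values 0..4 with prior p(s); the "likelihood"
# L(s) plays the role of P[x|s], so Delta U = -ln L(s) and Z_0 = 1. The TI
# estimate of ln P(x) = ln Z_1 is compared with the direct sum.

p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # prior P[s]  (invented)
L = np.array([0.01, 0.1, 0.5, 0.9, 0.3])  # likelihood P[x|s]  (invented)

def mean_dU(theta):
    """<Delta U>_theta with Delta U = -ln L, under P_theta ~ p * L**theta."""
    w = p * L ** theta
    return np.sum(w * (-np.log(L))) / w.sum()

# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(16)
theta = 0.5 * (nodes + 1.0)
ln_Px_TI = -0.5 * np.sum(weights * np.array([mean_dU(t) for t in theta]))

ln_Px_exact = np.log(np.sum(p * L))
print(ln_Px_TI, ln_Px_exact)
```

Since $\langle\Delta\mathcal{U}\rangle_\theta$ is smooth in $\theta$, a 16-point Gauss-Legendre rule already reproduces the exact answer to high precision.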

3.4.1 MCMC Sampling in Trajectory Space

TI-PWS relies on the computation of averages with respect to the ensembles corresponding to the interaction parameter $\theta$, given by $\mathcal{P}_\theta[\mathbfit{s}|\mathbfit{x}]\propto\exp(-\mathcal{U}_\theta[\mathbfit{s}, \mathbfit{x}])$. Sampling from this family of distributions using the SSA (Gillespie) algorithm is not possible. Instead, in this section we show several ways to implement a Markov chain Monte Carlo (MCMC) sampler in trajectory space that generates correctly distributed trajectories.

We can build an MCMC sampler in trajectory space using the Metropolis-Hastings algorithm. To create a Markov chain in trajectory space, we need a suitable proposal kernel that generates a new trajectory $\mathbfit{s}^\prime$ from a given trajectory $\mathbfit{s}$ with probability $T(\mathbfit{s}\rightarrow\mathbfit{s}^\prime)$. We accept the proposal using the Metropolis criterion with probability

$$A(\mathbfit{s}^\prime,\mathbfit{s})=\min\left( 1, e^{\mathcal{U}_\theta[\mathbfit{s}, \mathbfit{x}] - \mathcal{U}_\theta[\mathbfit{s}^\prime, \mathbfit{x}]}\frac{T(\mathbfit{s}^\prime\rightarrow\mathbfit{s})}{T(\mathbfit{s}\rightarrow\mathbfit{s}^\prime)} \right) \label{eq-metropolis-acceptance} \qquad(3.32)$$

to create a chain of trajectories with stationary distribution $\mathcal{P}_\theta[\mathbfit{s}|\mathbfit{x}]=e^{-\mathcal{U}_\theta[\mathbfit{s}, \mathbfit{x}]}/\mathcal{Z}_\theta[\mathbfit{x}]$ for $0\leq\theta\leq 1$. To ensure efficient convergence of the resulting Markov chain to its stationary distribution, the proposal kernel must balance two conflicting requirements: to explore the state space efficiently per unit of CPU time, the proposed trajectory $\mathbfit{s}^\prime$ must be sufficiently different from the original trajectory $\mathbfit{s}$, while at the same time it should not be so different that the acceptance probability becomes too low. Thus, the design of the proposal kernel is crucial for an efficient MCMC sampler, and we discuss various strategies to create trial trajectories below. Since different types of trial moves can easily be combined in a Metropolis-Hastings algorithm, the most efficient samplers often incorporate multiple complementary proposal strategies to improve the exploration speed in trajectory space.

The simplest (and naïve) proposal kernel generates an entirely new trajectory $\mathbfit{s}^\prime$ independent of $\mathbfit{s}$, by sampling directly from $\mathcal{P}[\mathbfit{s}]$ using the SSA. Hence, the transition kernel is given by $T(\mathbfit{s}\rightarrow\mathbfit{s}^\prime)=\mathcal{P}[\mathbfit{s}^\prime]$, and a proposal $\mathbfit{s}\rightarrow\mathbfit{s}^\prime$ is accepted with probability

$$\begin{aligned} A(\mathbfit{s}^\prime,\mathbfit{s}) &= \min\left( 1, e^{\mathcal{U}_\theta[\mathbfit{s}, \mathbfit{x}] - \mathcal{U}_\theta[\mathbfit{s}^\prime, \mathbfit{x}]}\frac{\mathcal{P}[\mathbfit{s}]}{\mathcal{P}[\mathbfit{s}^\prime]} \right) \\ &= \min\left( 1, \frac{\mathcal{P}[\mathbfit{x}|\mathbfit{s}^\prime]^\theta}{\mathcal{P}[\mathbfit{x}|\mathbfit{s}]^\theta} \right), \end{aligned} \label{eq-acceptance-rate} \qquad(3.33)$$

where the second line follows from the definition of $\mathcal{U}_\theta[\mathbfit{s},\mathbfit{x}]$ given in Equation 3.28. Although this simple scheme is correct, it should not be used in practice to compute $\mathcal{P}[\mathbfit{x}]$. Indeed, one would obtain a better estimate of $\mathcal{P}[\mathbfit{x}]$ by using the same number of independent sample trajectories from $\mathcal{P}[\mathbfit{s}]$ with the brute-force scheme of Equation 2.7, without taking the detour of using MCMC to estimate the normalization constant.
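For a concrete, if deliberately naïve, illustration, the following Python sketch runs this independence sampler on a hypothetical discrete toy target $\propto \mathcal{P}[\mathbfit{s}]\,\mathcal{P}[\mathbfit{x}|\mathbfit{s}]^\theta$, using the acceptance rule of Equation 3.33 (all numbers below are invented for the example):

```python
import numpy as np

# Independence sampler: proposals are drawn from the prior p, so the Hastings
# ratio reduces to (L(s')/L(s))**theta, exactly as in Eq. 3.33.

p = np.array([0.3, 0.5, 0.2])   # prior P[s]        (invented)
L = np.array([0.9, 0.1, 0.4])   # likelihood P[x|s] (invented)
theta = 0.7
rng = np.random.default_rng(3)

N = 200000
proposals = rng.choice(3, size=N, p=p)   # proposals from the prior
uniforms = rng.random(N)                 # for the Metropolis accept step

s = 0
counts = np.zeros(3)
for s_new, u in zip(proposals, uniforms):
    if u < (L[s_new] / L[s]) ** theta:   # acceptance of Eq. 3.33
        s = s_new
    counts[s] += 1

empirical = counts / counts.sum()
target = p * L ** theta
target /= target.sum()
print(empirical, target)
```

The empirical state frequencies converge to the normalized target $p(s)L(s)^\theta/\mathcal{Z}_\theta$, confirming that the chain has the intended stationary distribution.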

Instead, an idea from transition path sampling is to regenerate only part of the old trajectory in the proposal kernel [22]. By not regenerating the entire trajectory, the new trajectory $\mathbfit{s}^\prime$ is correlated with the original trajectory $\mathbfit{s}$, which improves the acceptance rate. The simplest way to generate trial trajectories using a partial update is a move termed forward shooting, in which a time point $\tau$ along the existing trajectory $\mathbfit{s}$ is randomly selected and a new trajectory segment is regrown from this point to the end, resulting in the proposal $\mathbfit{s}^\prime$. Since the new segment is generated according to the unbiased input statistics, the acceptance probability for the proposed trajectory is given by Equation 3.33, i.e., the same as if the entire trajectory had been regenerated. If the input dynamics given by $\mathcal{P}[\mathbfit{s}]$ are time-reversible, we can also perform a backward shooting move, in which the beginning of $\mathbfit{s}$ is replaced by a new segment generated backwards in time. Assuming that the initial condition is the input's steady-state distribution, the corresponding acceptance probability of the backward shooting move is again given by Equation 3.33. Using these two moves, we create an MCMC sampler in which both ends of the trajectory are flexible; thus, if the trajectory is not too long, the chain quickly relaxes to its stationary distribution.
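A forward-shooting move for a birth-death input like that of Section 3.5 might be sketched as follows (Python, with a hypothetical event-list representation of trajectories; the sketch restarts the suffix from the last event before $\tau$, which is statistically equivalent by the memorylessness of the exponential waiting times):

```python
import numpy as np

# Trajectory stored as parallel lists of event times and states.
kappa, lam = 50.0, 1.0   # birth rate and per-copy decay rate (Sec. 3.5 values)

def gillespie_segment(s0, t0, T, rng):
    """Simulate the birth-death input from (t0, s0) until time T."""
    t, s, times, states = t0, s0, [t0], [s0]
    while True:
        rate = kappa + lam * s
        t += rng.exponential(1.0 / rate)
        if t >= T:
            return times, states
        s += 1 if rng.random() < kappa / rate else -1
        times.append(t); states.append(s)

def forward_shoot(times, states, T, rng):
    """Propose a new path by regrowing everything after a random time tau."""
    tau = rng.uniform(0.0, T)
    i = int(np.searchsorted(times, tau)) - 1   # last event before tau
    new_t, new_s = gillespie_segment(states[i], times[i], T, rng)
    # keep the prefix, splice in the regrown suffix (drop duplicated start)
    return times[: i + 1] + new_t[1:], states[: i + 1] + new_s[1:]

rng = np.random.default_rng(4)
T = 10.0
times, states = gillespie_segment(50, 0.0, T, rng)
new_times, new_states = forward_shoot(times, states, T, rng)
```

The proposal is then accepted or rejected with Equation 3.33; only the likelihood of the regrown part of $\mathbfit{x}$ given the new suffix changes.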

For long trajectories, it can become a problem that the middle section is too inflexible when the proposal moves regenerate only the beginning or the end of a trajectory. Therefore, one could additionally incorporate mid-section regrowth to ensure that the middle parts of the trajectory also become flexible. To regrow a middle segment of duration $\tau$ of a trajectory $\mathbfit{s}$, we have to generate a new segment of duration $\tau$ according to the stochastic dynamics given by $\mathcal{P}[\mathbfit{s}]$, with the additional condition that both endpoints of the new segment connect to the existing trajectory. Although the starting point of the segment can be chosen freely, the challenge is to ensure that the end point of the new segment satisfies the end-point constraint. Stochastic processes that generate trajectories conditioned on hitting a specific point after a given duration $\tau$ are called stochastic bridging processes.

The simplest way to generate trajectories from a bridging process is to generate a trajectory segment of duration $\tau$ from the ordinary stochastic process and reject the segment if it does not hit the correct end point [23]. Clearly, this strategy is only feasible for very short segments and when the state space is discrete, since otherwise almost every generated segment is rejected for missing the correct end point. To avoid this problem, more efficient algorithms have been developed to simulate stochastic bridges for some types of stochastic processes. For diffusion processes, bridges can be simulated efficiently by introducing a guiding term into the corresponding Langevin equation [24]. For jump processes, bridges can be simulated using particle filters [25], a weighted stochastic simulation algorithm (wSSA) [26], or random time-discretization (uniformization) [23].
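The rejection strategy can be sketched in a few lines (Python, with invented parameters; feasible here only because the segment is short and the state space is discrete):

```python
import numpy as np

# Rejection sampling of a stochastic bridge for a birth-death jump process:
# simulate ordinary segments of duration tau and keep only those that end in
# the required state s_end.

kappa, lam = 5.0, 1.0   # example birth and per-copy decay rates (invented)
rng = np.random.default_rng(5)

def simulate_segment(s0, tau):
    """Simulate for duration tau; return the event list and the final state."""
    t, s, path = 0.0, s0, [(0.0, s0)]
    while True:
        rate = kappa + lam * s
        t += rng.exponential(1.0 / rate)
        if t >= tau:
            return path, s
        s += 1 if rng.random() < kappa / rate else -1
        path.append((t, s))

def sample_bridge(s0, s_end, tau, max_tries=100000):
    """Retry until a segment hits s_end at time tau (hopeless for long tau)."""
    for _ in range(max_tries):
        path, final = simulate_segment(s0, tau)
        if final == s_end:
            return path
    raise RuntimeError("no bridge found within max_tries")

bridge = sample_bridge(5, 5, 0.5)
```

For this short segment the endpoint is hit after only a handful of tries; the acceptance probability decays rapidly with $\tau$, which is precisely why the guided and uniformization-based methods cited above were developed.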

Further techniques to create trajectory-space MCMC samplers have been developed in the literature. Crooks [27] describes a scheme to create MCMC moves for trajectories evolving under non-equilibrium dynamics, by making MCMC moves that change the trajectories' noise histories. In the particle Markov chain Monte Carlo (PMCMC) algorithm, proposal trajectories are generated using a particle filter and accepted with an appropriate Metropolis criterion [28]. Another class of efficient samplers for Markov jump processes can be built using uniformization [29].

3.5 Simple Application and Benchmark

To demonstrate the power of our framework and illustrate how the techniques of the previous sections can be used in practice, we apply PWS to a simple chemical reaction network. We consider a linearly coupled birth-death process that has been studied previously using a Gaussian model [9] and, with an approximate technique, by Duso and Zechner [10]; we compare our results with both studies. This simple birth-death system serves to illustrate the main ideas of our approach and also highlights that linear systems can be distinctly non-Gaussian.

The code used to produce the PWS estimates was written in the Julia programming language  [30] and has been made freely available  [31,32]. For performing stochastic simulations we use the DifferentialEquations.jl package  [33].

We consider a stochastic process $\emptyset\rightleftharpoons\mathrm{X}$ of species $\mathrm{X}$, which is created at rate $\rho(t)$ and decays at a constant rate $\mu$ per copy of $\mathrm{X}$. This system receives information from an input signal that modulates the birth rate $\rho(t)$. For simplicity, we assume it is given by

$$\rho(t)=\rho_0 s(t)\,, \qquad(3.34)$$

where $\rho_0$ is a constant and $s(t)$ is the input copy number at time $t$. This is a simple model for gene expression, where the rate of production of a protein $\mathrm{X}$ is controlled by a transcription factor $\mathrm{S}$, and $\mathrm{X}$ itself has a characteristic decay rate. The input trajectories $s(t)$ are themselves generated by a separate birth-death process $\emptyset\rightleftharpoons\mathrm{S}$ with production rate $\kappa$ and decay rate $\lambda$.

We compute the trajectory mutual information for this system as a function of the duration $T$ of the input and output trajectories. For $T\rightarrow\infty$, the trajectory mutual information is expected to increase linearly with $T$, since, on average, every additional output segment contains the same additional amount of information about the input trajectory. Because we are interested in the mutual information in steady state, the initial states $(s_0,x_0)$ were drawn from the stationary distribution $\mathrm{P}(s_0,x_0)$, which was obtained using a Gaussian approximation. This does not influence the asymptotic rate of increase of the mutual information, but it leads to a nonzero mutual information already at $T=0$.

Figure 3.2: Comparing different schemes to compute the mutual information as a function of trajectory duration for a simple coupled birth-death process with rates $\kappa = 50$, $\lambda=1$, $\rho_0=10$, $\mu = 10$ and steady-state initial condition. The top panels show example trajectories of input and output, as well as the mean (solid line) and standard deviation (shaded region). Below, the mutual information is shown as a function of trajectory duration. The inset shows an enlarged version of the dotted rectangle near the origin. For short trajectories, all PWS estimates agree. Yet, for longer trajectories, DPWS and TI-PWS require a much larger number of input trajectories $M$ for computing $\mathcal{P}[\mathbfit{x}]$ than RR-PWS to converge. Results for the three PWS variants are compared with the estimate of Duso and Zechner [10] and with the linear-noise approximation from Ref. [9]. We find excellent agreement between the Duso scheme and RR-PWS. The Gaussian linear-noise approximation systematically underestimates the mutual information. All PWS estimates, as well as the Duso approximation, were computed using $N=10^4$ samples from $\mathcal{P}[\mathbfit{s},\mathbfit{x}]$.

Figure 3.2 shows the mutual information as a function of the trajectory duration $T$. We compare the three PWS variants and two approximate schemes. One is that of Duso and Zechner [10]. To apply it, we used the code publicly provided by the authors, and to avoid modifying this code, we chose a fixed initial condition $(s_0=x_0=50)$, which causes the mutual information to be zero at $T=0$. The figure also shows the analytical result of a Gaussian model [9], obtained using the linear-noise approximation (see Section 3.7.1).

We find that the efficiency of the respective PWS variants depends on the duration of the input-output trajectories. For short trajectories, all PWS variants yield very similar estimates of the mutual information. However, for longer trajectories the estimates of DPWS and, to a lesser degree, TI-PWS diverge because of poor sampling of trajectory space in the estimate of $\mathcal{P}[\mathbfit{x}]$. For longer trajectories, the estimate becomes increasingly dominated by rare trajectories, which make an exceptionally large contribution to the average in $\mathcal{P}[\mathbfit{x}]$. Missing these rare, high-weight trajectories tends to increase the marginal entropy $\mathrm{H}(\mathcal{X})$ [see Equation 2.8], and thereby the mutual information; indeed, the estimates of DPWS and TI-PWS are higher than that of RR-PWS. For brute-force DPWS, the error decreases as we increase the number $M$ of input trajectories per output trajectory used to estimate $\mathcal{P}[\mathbfit{x}]$. Similarly, for TI-PWS the error decreases as we use more MCMC samples in the marginalization scheme. For RR-PWS, however, the estimate has already converged for $M=128$; we verified that a further increase of $M$ does not change the results.

We also find excellent agreement between the RR-PWS estimate and the approximate result of Duso and Zechner [10]; only very small deviations are visible in Figure 3.2. These deviations are mostly caused by the different choices of initial conditions. In RR-PWS, the initial conditions are drawn from the stationary distribution, while in the Duso scheme they are fixed, such that the mutual information at $T=0$ computed with RR-PWS is finite while that computed with the Duso scheme is zero. Yet, as the trajectory duration $T$ increases, the Duso estimate slowly "catches up" with the RR-PWS result.

Figure 3.2 also shows that although the Gaussian model matches the PWS result at $T=0$, it systematically underestimates the mutual information for trajectories of finite duration $T>0$. Interestingly, this is not a consequence of small copy-number fluctuations: increasing the average copy number does not significantly improve the Gaussian estimate.

Figure 3.3: Comparing the estimation bias of the different PWS variants in relation to their CPU-time requirements. Each dot represents a single mutual information estimate with $N=10^4$ samples for output trajectories of duration $T=5$. Almost all the CPU time of a PWS estimate is spent on the computation of the marginal probability $\mathcal{P}[\mathbfit{x}]$. The bias of the marginal probability estimate can be reduced by using a larger number $M$ of sampled input trajectories to compute the marginalization integral, which also increases the required CPU time. The RR-PWS estimate converges much faster than those of DPWS and TI-PWS. For DPWS and TI-PWS, the dots represent estimates ranging from $M=2^5$ to $M=2^{14}$; for RR-PWS, from $M=2^3$ to $M=2^{10}$. As the baseline of zero bias we use the converged result from the RR-PWS estimates.

The different approaches for computing the marginal probability $\mathcal{P}[\mathbfit{x}]$ lead to different computational efficiencies of the respective PWS schemes. In Figure 3.3, as a benchmark, we show the magnitude of the error of the different PWS estimates in relation to the required CPU time. Indeed, as expected, the computation of the marginal probability poses problems for long trajectories when using the brute-force DPWS scheme. More interestingly, while TI-PWS improves the estimate of the mutual information, the improvement is not dramatic. Unlike the brute-force scheme, thermodynamic integration does make it possible to generate input trajectories $\mathbfit{s}$ that are correlated with the output trajectory $\mathbfit{x}$, but it still overestimates the mutual information for long trajectories unless a very large number of MCMC samples is used.

The RR-PWS implementation clearly outperforms the other estimates for this system. The regular resampling steps ensure that we mostly sample input trajectories $\mathbfit{s}$ with non-vanishing likelihood $\mathcal{P}[\mathbfit{x}|\mathbfit{s}]$, thereby avoiding the sampling problem of DPWS. Moreover, sequential Monte Carlo techniques such as RR-PWS and FFS [15] have a considerable advantage over MCMC techniques in trajectory sampling: with MCMC path sampling, we typically make small changes to an existing trajectory, such that the system moves slowly through path space, leading to poor statistics. In contrast, in RR-PWS we generate new trajectories from scratch, segment by segment, and these explore trajectory space much faster.

3.6 Discussion

Aside from Direct PWS introduced in Chapter 2, we developed two additional variants of PWS, capitalizing on the connection between information theory and statistical physics. Specifically, the computation of the mutual information requires the evaluation of the marginal probability of individual output trajectories, $\mathcal{P}[\mathbfit{x}]$. This corresponds to the computation of a partition function in statistical physics, $\mathcal{P}[\mathbfit{x}]=\int \mathcal{D}[\mathbfit{s}]\,\mathcal{P}[\mathbfit{s}]\,\mathcal{P}[\mathbfit{x}|\mathbfit{s}]$. RR-PWS and TI-PWS are based on techniques from polymer and rare-event simulations that make the computation of the marginal trajectory probability more efficient.

The different PWS variants share some characteristics yet also differ in others. DPWS and RR-PWS are static Monte Carlo schemes in which the trajectories are generated independently from the previous ones. These methods are similar to static polymer sampling schemes like PERM  [3] and rare-event methods like DFFS or BG-FFS  [15]. In contrast, TI-PWS is a dynamic Monte Carlo scheme, where a new trajectory is generated from the previous trajectory. In this regard, this method is similar to the CBMC scheme for polymer simulations  [35] and the TPS  [8], TIS  [36], and RB-FFS  [15] schemes to harvest transition paths. The benefit of static schemes is that the newly generated trajectories are uncorrelated from the previous ones, which means that they are less likely to get stuck in certain regions of path space. Concomitantly, they tend to diffuse faster through the configuration space. Indeed, TI-PWS suffers from a problem that is also often encountered in TPS or TIS, which is that the middle sections of the trajectories move only slowly in their perpendicular direction. Tricks that have been applied to TPS and TIS to solve this problem, such as parallel tempering, could also be of use here  [37].

Another distinction is that RR-PWS generates all the trajectories in the ensemble simultaneously yet segment by segment, like DFFS, while DPWS and TI-PWS generate only one full trajectory at a time, similar to RB-FFS, BG-FFS, and also TPS and TIS. Consequently, RR-PWS, like DFFS, faces the risk of genetic drift, which means that, after sufficiently many resampling steps, most paths of the ensemble will originate from the same initial seed. Thus, when continuing to sample new segments, the old segments that are far in the past become essentially fixed, which makes it possible to miss important paths in the RR-PWS sampling procedure. As in DFFS, the risk of genetic drift in RR-PWS can be mitigated by increasing the initial number of path segments. Although we did not employ this trick here, we found that RR-PWS was by far the most powerful scheme of the three variants studied.

Nonetheless, we expect that DPWS and TI-PWS become more efficient in systems that respond to the input signal with a significant delay $\tau$. In these cases, the weight of a particular output trajectory depends on the degree to which the dynamics of the output trajectory correlates with the dynamics of the input trajectory a time $\tau$ earlier. Because in RR-PWS a new segment of an output trajectory is generated based on the corresponding segment of the input trajectory that spans the same time interval, it may miss these correlations between the dynamics of the output and that of the input a time $\tau$ earlier. In contrast, DPWS and TI-PWS generate full trajectories one at a time, and are therefore more likely to capture these correlations. The machine-learning-based approach for determining the optimal importance-sampling distribution $q[\mathbfit{s}|\mathbfit{x}]$ presented in Chapter 6 (Section 6.2.4) is also likely to prove useful in these scenarios with complex temporal dependencies between the input and output trajectories.

3.7 Supplementary Information

3.7.1 Gaussian Approximation of the Linear System

We derive the Gaussian approximation of the simple reaction system used in Section 3.5. We recall the elementary biochemical reaction motif consisting of four reactions

$$\emptyset \rightarrow \mathrm{S}, \quad \mathrm{S} \rightarrow \emptyset, \quad \mathrm{S} \rightarrow \mathrm{S} + \mathrm{X}, \quad \mathrm{X} \rightarrow \emptyset \label{eq-reaction-scheme} \qquad(3.35)$$

with input $\mathrm{S}$ and output $\mathrm{X}$. This reaction motif is a simple model for gene expression, where the rate of production of a protein $\mathrm{X}$ is controlled by a transcription factor $\mathrm{S}$, and $\mathrm{X}$ itself has a characteristic decay rate. The dynamics of $\mathrm{S}$ are given by a constant birth rate and a constant per-molecule decay rate.

We compute the covariance functions of this model, which are then used to derive the Gaussian signal statistics and allow us to compute the Gaussian information rate. Specifically, we assume that the process is sampled at a rate $\nu$, with $S_1,\ldots,S_n$ being the sequence of sampled inputs and $X_1,\ldots,X_n$ the sequence of outputs in time. We describe the dynamics of the input and output as fluctuations around the mean, i.e., we write $\delta S_i = S_i - \langle S_i \rangle$ and $\delta X_i = X_i - \langle X_i \rangle$. In the limit of large copy numbers, due to the central limit theorem, the distribution of these fluctuations becomes Gaussian [38].

In particular, let $\mathbfit{Z}=(\delta S_1,\ldots,\delta S_n, \delta X_1,\ldots,\delta X_n)^T$ be the concatenation of the input and output sequences. Then, the distribution of $\mathbfit{Z}$ is multivariate normal, i.e.,

P(๐’=๐’›)=1(2ฯ€)2n|ฮฃ|eโˆ’12(๐’›Tฮฃโˆ’1๐’›)(3.36)\mathrm{P}(\mathbfit{Z}=\mathbfit{z}) = \frac{1}{\sqrt{(2\pi)^{2n}|\Sigma|}} e^{-\frac{1}{2}(\mathbfit{z}^T \Sigma^{-1} \mathbfit{z})} \qquad(3.36)

where the covariance matrix $\Sigma\in\mathbb{R}^{2n\times 2n}$ has the following block structure

$$\Sigma = \begin{pmatrix} \Sigma_{SS} & \Sigma_{XS} \\ \Sigma_{SX} & \Sigma_{XX} \end{pmatrix}\,. \label{eq-cov_z} \qquad(3.37)$$

Here $\Sigma_{SS}$ and $\Sigma_{XX}$ are the (auto-)covariance matrices of the input and the output, respectively, whereas $\Sigma_{SX}=\Sigma^T_{XS}$ contains the cross-covariances. The matrix elements are given by $\Sigma^{ij}_{AB} = \langle \delta A_i \delta B_j \rangle = C_{AB}(t_i - t_j)$, where $C_{AB}(t)$ denotes the (cross-)covariance functions and $t_1,\ldots,t_n$ are the sampling times. Thus, the full statistics of the trajectories are determined by the (cross-)covariance functions.

Since the reaction scheme in Equation 3.35 features only first-order reactions, the covariance functions can be calculated explicitly using the regression theorem [38,39]. For $t\geq 0$, we obtain the following expressions for the covariance functions:

$$\begin{aligned}
C_{SS}(t) &= \sigma^2_{SS} \exp(-\lambda t) \label{eq-c_ss} \\
C_{SX}(t) &= \rho\sigma^2_{SS}\, t\, \mathrm{exprel}[(\lambda - \mu) t]\, \exp(-\lambda t) + \sigma^2_{SX} \exp(-\mu t) \label{eq-c_sx} \\
C_{XS}(t) &= \sigma^2_{SX} \exp(-\lambda t) \label{eq-c_xs} \\
C_{XX}(t) &= \rho \sigma^2_{SX}\, t\, \mathrm{exprel}[(\lambda - \mu) t]\, \exp(-\lambda t) + \sigma^2_{XX} \exp(-\mu t)\,. \label{eq-c_xx}
\end{aligned} \qquad(3.38)$$

In the expressions above we used the relative exponential function

$$\mathrm{exprel}(x) = \begin{cases} \dfrac{\exp(x) - 1}{x}, & \text{if } x \neq 0 \\ 1, & \text{if } x = 0 \end{cases} \qquad(3.39)$$

and the point (co-)variances

$$\begin{aligned}
\sigma^2_{SS} &= \frac{\kappa}{\lambda} \\
\sigma^2_{SX} &= \frac{\rho \sigma^2_{SS}}{\lambda + \mu} \\
\sigma^2_{XX} &= \frac{\rho}{\mu} \left(\sigma^2_{SS} + \sigma^2_{SX}\right)\,.
\end{aligned} \qquad(3.40)$$

Because the process is stationary, the values of the covariance functions for $t<0$ can be obtained by applying the symmetry relation $C_{AB}(t) = C_{BA}(-t)$.
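Equations 3.38–3.40 translate directly into code. The following is a minimal Python sketch; the parameter values for $\kappa$, $\lambda$, $\rho$, $\mu$ are hypothetical examples, not the values used in the chapter's benchmarks. Note that `math.expm1` keeps $\mathrm{exprel}$ accurate for small arguments, where evaluating $(\exp(x)-1)/x$ naively loses precision.

```python
import math

# Hypothetical example parameters (not the chapter's benchmark values):
# kappa: birth rate of S, lambda_: per-molecule decay rate of S,
# rho: production rate of X per S molecule, mu: decay rate of X.
kappa, lambda_, rho, mu = 50.0, 1.0, 10.0, 3.0

def exprel(x):
    """Relative exponential (exp(x) - 1)/x of Equation 3.39;
    math.expm1 keeps the quotient accurate for small |x|."""
    return math.expm1(x) / x if x != 0.0 else 1.0

# Point (co-)variances, Equation 3.40.
var_ss = kappa / lambda_
var_sx = rho * var_ss / (lambda_ + mu)
var_xx = (rho / mu) * (var_ss + var_sx)

def cov(a, b, t):
    """(Cross-)covariance function C_AB(t) from Equation 3.38.
    For t < 0, use stationarity: C_AB(t) = C_BA(-t)."""
    if t < 0:
        return cov(b, a, -t)
    decay = math.exp(-lambda_ * t)
    bump = t * exprel((lambda_ - mu) * t) * decay
    if (a, b) == ("S", "S"):
        return var_ss * decay
    if (a, b) == ("S", "X"):
        return rho * var_ss * bump + var_sx * math.exp(-mu * t)
    if (a, b) == ("X", "S"):
        return var_sx * decay
    if (a, b) == ("X", "X"):
        return rho * var_sx * bump + var_xx * math.exp(-mu * t)
    raise ValueError(f"unknown species pair ({a}, {b})")
```

At $t=0$ the cross-covariances $C_{SX}(0)$ and $C_{XS}(0)$ both reduce to $\sigma^2_{SX}$, as they must for a consistent covariance matrix.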

Now, we can directly compute the mutual information from the covariances. Using the covariance functions in Equation 3.38, we obtain the matrix elements of the joint covariance matrix $\Sigma$ defined in Equation 3.37, and the mutual information is then given by the expression

I(๐’ฎ,๐’ณ)=12ln(|ฮฃSS||ฮฃXX||ฮฃ|).(3.41)I(\mathcal{S}, \mathcal{X}) = \frac{1}{2} \ln \left( \frac{|\Sigma_{SS}||\Sigma_{XX}|}{|\Sigma|} \right). \qquad(3.41)

Note that for discretized trajectories of length $n$, the matrix $\Sigma$ has dimensions $2n\times 2n$. Thus, computing the trajectory mutual information requires evaluating the determinant of a $2n\times 2n$ matrix, which can be computationally challenging for long trajectories (large $n$). In ?sec-notes-gaussian of this thesis, we discuss techniques to considerably accelerate the computation of the mutual information.
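As an illustration of Equations 3.37 and 3.41, the self-contained sketch below assembles the joint covariance matrix for a short discretely sampled trajectory and evaluates the Gaussian mutual information via log-determinants, here computed by plain Gaussian elimination rather than an optimized routine. The parameter values are again hypothetical examples.

```python
import math

# Hypothetical example parameters (same conventions as the main text).
kappa, lambda_, rho, mu = 50.0, 1.0, 10.0, 3.0
var_ss = kappa / lambda_
var_sx = rho * var_ss / (lambda_ + mu)
var_xx = (rho / mu) * (var_ss + var_sx)

def exprel(x):
    return math.expm1(x) / x if x != 0.0 else 1.0

def cov(a, b, t):
    # Equations 3.38-3.40, with C_AB(t) = C_BA(-t) for t < 0.
    if t < 0:
        return cov(b, a, -t)
    bump = t * exprel((lambda_ - mu) * t) * math.exp(-lambda_ * t)
    if (a, b) == ("S", "S"):
        return var_ss * math.exp(-lambda_ * t)
    if (a, b) == ("S", "X"):
        return rho * var_ss * bump + var_sx * math.exp(-mu * t)
    if (a, b) == ("X", "S"):
        return var_sx * math.exp(-lambda_ * t)
    return rho * var_sx * bump + var_xx * math.exp(-mu * t)

def logdet(m):
    """log|det(m)| by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, acc = len(a), 0.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        acc += math.log(abs(a[k][k]))
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return acc

def gaussian_mi(n, nu):
    """Equation 3.41: MI between n input and n output samples taken at
    rate nu, using the block covariance matrix of Equation 3.37."""
    labels = [("S", i / nu) for i in range(n)] + [("X", i / nu) for i in range(n)]
    sigma = [[cov(ai, aj, ti - tj) for (aj, tj) in labels] for (ai, ti) in labels]
    sigma_ss = [row[:n] for row in sigma[:n]]
    sigma_xx = [row[n:] for row in sigma[n:]]
    return 0.5 * (logdet(sigma_ss) + logdet(sigma_xx) - logdet(sigma))
```

For $n=1$ this reduces to the familiar $\tfrac{1}{2}\ln[\sigma^2_{SS}\sigma^2_{XX}/(\sigma^2_{SS}\sigma^2_{XX}-\sigma^4_{SX})]$, and the mutual information grows with trajectory length, as expected.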

Using spectral analysis, Tostevin and ten Wolde [40] derived an analytical expression for the Gaussian mutual information rate of this model in the continuous-time limit, given by

R(๐’ฎ,๐’ณ)=ฮป2[1+ฯฮปโˆ’1].(3.42)R(\mathcal{S}, \mathcal{X}) = \frac{\lambda}{2}\left[ \sqrt{1 + \frac{\rho}{\lambda}} - 1 \right] \,. \label{eq-spectral_tostevin_gaussian} \qquad(3.42)

The information rate of the discretely sampled process converges to this value as the sampling rate approaches infinity [40]. Notably, however, the model corresponding to Equation 3.42 is a continuum description that assumes Gaussian statistics; indeed, this rate deviates from the mutual information rate of the exact model described by the chemical master equation, even in the limit of large copy numbers [41]. This finding is discussed further in Chapter 5 (Section 5.3.1).
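A quick numerical sketch of Equation 3.42 (with made-up parameter values) also exposes its weak-coupling limit: for $\rho \ll \lambda$ the square root expands to $1 + \rho/(2\lambda)$, so the rate reduces to $\rho/4$, while for any finite $\rho$ the rate stays below this leading-order value.

```python
import math

def gaussian_rate(lambda_, rho):
    """Analytical Gaussian information rate of Equation 3.42."""
    return 0.5 * lambda_ * (math.sqrt(1.0 + rho / lambda_) - 1.0)

# Weak-coupling limit: sqrt(1 + rho/lambda) ~ 1 + rho/(2 lambda),
# so gaussian_rate -> rho / 4 as rho / lambda -> 0.
```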

References

[1]
M. Reinhardt, G. Tkaฤik, and P. R. ten Wolde, Path Weight Sampling: Exact Monte Carlo Computation of the Mutual Information between Stochastic Trajectories, Physical Review X 13, 041017 (2023).
[2]
[3]
[4]
D. Frenkel and B. Smit, Understanding Molecular Simulation, 2nd ed. (Academic Press, San Diego, 2002).
[5]
[6]
A. Gelman and X.-L. Meng, Simulating normalizing constants: from importance sampling to bridge sampling to path sampling, Statistical Science 13, 163 (1998).
[7]
R. M. Neal, Annealed importance sampling, Statistics and Computing 11, 125 (2001).
[8]
P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, TRANSITION PATH SAMPLING: Throwing Ropes Over Rough Mountain Passes, in the Dark, Annual Review of Physical Chemistry 53, 291 (2002).
[9]
F. Tostevin and P. R. ten Wolde, Mutual Information between Input and Output Trajectories of Biochemical Networks, Physical Review Letters 102, 218101 (2009).
[10]
L. Duso and C. Zechner, Path Mutual Information for a Class of Biochemical Reaction Networks, in 2019 IEEE 58th Conference on Decision and Control (CDC) (2019), pp. 6610โ€“6615.
[11]
M. Mรผller and W. Paul, Measuring the chemical potential of polymer solutions and melts in computer simulations, The Journal of Chemical Physics 100, 719 (1994).
[12]
M. N. Rosenbluth and A. W. Rosenbluth, Monte Carlo Calculation of the Average Extension of Molecular Chains, The Journal of Chemical Physics 23, 356 (1955).
[13]
N. J. Gordon, D. J. Salmond, and A. F. M. Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proceedings F Radar and Signal Processing 140, 107 (1993).
[14]
T. Prellberg and J. Krawczyk, Flat Histogram Version of the Pruned and Enriched Rosenbluth Method, Physical Review Letters 92, 120602 (2004).
[15]
R. J. Allen, D. Frenkel, and P. R. ten Wolde, Simulating rare events in equilibrium or nonequilibrium stochastic systems, The Journal of Chemical Physics 124, 024102 (2006).
[16]
N. B. Becker, R. J. Allen, and P. R. ten Wolde, Non-stationary forward flux sampling, The Journal of Chemical Physics 136, 174118 (2012).
[17]
P. Del Moral, Nonlinear filtering: Interacting particle resolution, Comptes Rendus de l'Académie des Sciences - Series I - Mathematics 325, 653 (1997).
[18]
A. F. M. Smith and A. E. Gelfand, Bayesian Statistics without Tears: A Sampling-Resampling Perspective, The American Statistician 46, 84 (1992).
[19]
L. Martino, V. Elvira, and F. Louzada, Effective sample size for importance sampling based on discrepancy measures, Signal Processing 131, 386 (2017).
[20]
A. Doucet and A. M. Johansen, A tutorial on particle filtering and smoothing: fifteen years later, in Oxford Handbook of Nonlinear Filtering, edited by D. Crisan and B. Rozovskii (Oxford University Press, 2011).
[21]
N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of State Calculations by Fast Computing Machines, The Journal of Chemical Physics 21, 1087 (1953).
[22]
C. Dellago, P. G. Bolhuis, and D. Chandler, Efficient transition path sampling: Application to Lennard-Jones cluster rearrangements, The Journal of Chemical Physics 108, 9236 (1998).
[23]
[24]
F. van der Meulen and M. Schauer, Bayesian estimation of discretely observed multi-dimensional diffusion processes using guided proposals, Electronic Journal of Statistics 11, 2358 (2017).
[25]
A. Golightly and D. J. Wilkinson, Bayesian inference for Markov jump processes with informative observations, Statistical Applications in Genetics and Molecular Biology 14, 169 (2015).
[26]
C. S. Gillespie and A. Golightly, Guided proposals for efficient weighted stochastic simulation, The Journal of Chemical Physics 150, 224103 (2019).
[27]
G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Physical Review E 61, 2361 (2000).
[28]
C. Andrieu, A. Doucet, and R. Holenstein, Particle Markov chain Monte Carlo methods, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72, 269 (2010).
[29]
V. Rao and Y. W. Teh, Fast MCMC sampling for Markov jump processes and extensions, Journal of Machine Learning Research 14, 3295 (2013).
[30]
J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, Julia: A Fresh Approach to Numerical Computing, SIAM Review 59, 65 (2017).
[31]
M. Reinhardt, PathWeightSampling.jl (v0.1.0), (2021).
[32]
M. Reinhardt, manuel-rhdt/PathWeightSampling.jl, (2024).
[33]
[34]
A.-L. Moor, A. Tjalma, M. Reinhardt, P. R. ten Wolde, and C. Zechner, Reaction-Based Information Processing in Biochemical Networks, (unpublished).
[35]
J. I. Siepmann and D. Frenkel, Configurational bias Monte Carlo: a new sampling scheme for flexible chains, Molecular Physics 75, 59 (1992).
[36]
T. S. van Erp, D. Moroni, and P. G. Bolhuis, A novel path sampling method for the calculation of rate constants, The Journal of Chemical Physics 118, 7762 (2003).
[37]
T. J. H. Vlugt and B. Smit, On the efficient sampling of pathways in the transition path ensemble, PhysChemComm 4, 11 (2001).
[38]
C. Gardiner, Handbook of Stochastic Methods, 4th ed. (Springer-Verlag, Berlin Heidelberg, 2009).
[39]
P. B. Warren, S. Tănase-Nicola, and P. R. ten Wolde, Exact results for noise power spectra in linear biochemical reaction networks, The Journal of Chemical Physics 125, 144904 (2006).
[40]
F. Tostevin and P. R. ten Wolde, Mutual information in time-varying biochemical systems, Phys. Rev. E 81, 061917 (2010).
[41]
A.-L. Moor and C. Zechner, Dynamic information transfer in stochastic biochemical networks, Physical Review Research 5, 013032 (2023).

  1. Indeed, the mutual information $\mathrm{I}(\mathcal{S}, \mathcal{X})$ precisely quantifies how strong the statistical dependence is between the trajectory-valued random variables $\mathcal{S}$ and $\mathcal{X}$. From its definition $\mathrm{I}(\mathcal{S}, \mathcal{X})=\mathrm{H}(\mathcal{S}) - \mathrm{H}(\mathcal{S}|\mathcal{X})$ we can understand more clearly how this affects the efficiency of the Monte Carlo estimate. Roughly speaking, $\mathrm{H}(\mathcal{S})$ is related to the number of distinct trajectories $\mathbfit{s}$ that can arise from the dynamics given by $\mathcal{P}[\mathbfit{s}]$, while $\mathrm{H}(\mathcal{S}|\mathcal{X})$ is related to the number of distinct trajectories $\mathbfit{s}$ that could have led to a specific output $\mathbfit{x}$, on average. Therefore, if the mutual information is very large, the difference between these two numbers is very large, and consequently the overall number of distinct trajectories is much larger than the number of distinct trajectories compatible with output $\mathbfit{x}$. Now, if we generate trajectories according to the dynamics given by $\mathcal{P}[\mathbfit{s}]$, with overwhelming probability we generate a trajectory $\mathbfit{s}$ that is not compatible with the output trajectory $\mathbfit{x}$, and therefore $\mathcal{P}[\mathbfit{x}|\mathbfit{s}]\approx 0$. Hence, the effective number of samples $M_\text{eff}$ is much smaller than the actual number of generated trajectories $M$, i.e. $M_\text{eff} \ll M$. We therefore only expect the estimate in Equation 2.7 to be reliable when computing the mutual information for systems where it is not too high. Thus, strikingly, the difficulty of computing the mutual information grows with the magnitude of the mutual information itself.↩︎
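The statement $M_\text{eff} \ll M$ can be made concrete with the standard effective-sample-size estimator discussed in [19]. Below is a sketch that evaluates it from log-weights for numerical stability; the weight values are made up for illustration.

```python
import math

def effective_sample_size(log_weights):
    """Kish effective sample size (sum w)^2 / (sum w^2), evaluated from
    log-weights so that huge weight ratios do not overflow."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s1 = sum(w)
    s2 = sum(x * x for x in w)
    return s1 * s1 / s2

# Uniform weights: every sample counts equally, M_eff = M.
# One dominant weight: M_eff collapses towards 1, the regime described
# in the footnote where P[x|s] is approximately 0 for almost all s.
```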

  2. Note that while the distribution $\exp(-\mathcal{U}[\mathbfit{s},\mathbfit{x}])/Z[\mathbfit{x}]$ resembles the equilibrium distribution of a canonical ensemble from statistical mechanics, this does not imply that we are studying systems in thermal equilibrium. Indeed, PWS is used to quantify information transmission in systems driven out of equilibrium by the input signal. Thus, the notation introduced in this section is merely a mathematical reformulation of the marginalization integral to make the analogy to statistical physics apparent. We assign no physical meaning to the potentials and free energies. Also note that we set $k_{\mathrm{B}}T = 1$ since temperature is irrelevant here.↩︎

  3. The samples generated through resampling are only approximate because they are limited to the discrete set $\{\mathbfit{s}^1, \ldots, \mathbfit{s}^M\}$, which was originally drawn from the prior distribution $f[\mathbfit{s}]$. The resampling process assigns weights based on the target distribution, which are used to select from that set, but it does not generate entirely new samples directly from the target. Therefore, the sampled points do not constitute draws from the target distribution unless $M \rightarrow \infty$.↩︎

  4. https://github.com/zechnerlab/PathMI↩︎

  5. A detailed analysis of this observation was carried out in a (currently unpublished) collaborative work with Anne-Lena Moor and Christoph Zechner from the MPI-CBG in Dresden [34]. This work demonstrates that the discrepancy arises because all the information on the input signal is contained in the output species' production process, which is catalyzed by the input, rather than in the decay process of the output, which occurs independently of the input. The PWS result captures this distinction by using a fully discrete approach. In contrast, the Gaussian estimate of the linear-noise approximation misses this distinction because in the noise term for the output the contributions from the production and decay reactions are added together. We also comment further on this matter in Chapter 5.↩︎