Parallel and interacting stochastic approximation annealing algorithms for global optimisation

We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) by using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which presents stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desirable convergence properties when a square-root cooling schedule is used. It can be implemented in parallel computing environments in order to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that SAA would find difficult to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.


Introduction
There is a continuous need for the development of efficient algorithms to tackle mathematical optimisation problems, which are often met in several fields of science. For instance, in computational chemistry, predicting the native conformation of a protein can be performed by minimising its potential energy. In classical or Bayesian statistics, inference can be performed by maximising the likelihood function (of a statistical model assumed to have generated an observed data set) (Casella and Berger, 1990) or the associated posterior density (a distribution that reflects the researcher's belief about the unknown quantities of interest) (Robert, 2007), respectively.
We assume that there is interest in minimising a function U(x), called the cost function, defined on a space X ⊂ R^d; i.e. we seek (x*, U(x*)) such that x* = arg min_{x∈X} U(x). Hereafter, we discuss minimisation only, because maximisation of U(x) can be performed equivalently by minimising the function Ũ(x) := −U(x). Several stochastic optimisation algorithms have been proposed in the literature, e.g. simulated annealing (SA) (Kirkpatrick et al., 1983; Metropolis et al., 1953), the genetic algorithm (Goldberg, 1989; Holland, 1975), annealing stochastic approximation Monte Carlo (ASAMC) (Liang, 2007), annealing evolutionary stochastic approximation Monte Carlo (AESAMC) (Liang, 2011), and stochastic approximation annealing (SAA) (Liang et al., 2014). Despite their success, they encounter various difficulties in converging to the global minimum, an issue that becomes more severe when U(·) is highly rugged or high dimensional.
Simulated annealing (SA) (Kirkpatrick et al., 1983; Černý, 1985) aims at finding the global minimum based on the fact that minimisation of U(x) can be addressed in statistical terms by simulating the Boltzmann distribution f_{τ*}(x), with density f_{τ*}(x) ∝ exp(−U(x)/τ*), at a small value of the temperature parameter τ* > 0 close to 0. SA considers a temperature ladder {τ_t}, a monotonically decreasing sequence of temperatures with τ_1 reasonably large. A standard version of SA involves simulating consecutively from a sequence of Boltzmann distributions {f_{τ_t}(x); t = 0, 1, ...}, parametrised by the temperature ladder, via Metropolis-Hastings MCMC updates (Hastings, 1970; Metropolis et al., 1953). A standard version of SA is presented as pseudo-code in Algorithm 1. At early iterations, the algorithm aims at escaping from the attraction of local minima by flattening f_{τ_t}(x) through τ_t. During the subsequent iterations, τ_t decreases progressively towards 0, and hence the values simulated from f_{τ_t}(x) concentrate in a narrower and narrower neighbourhood of the global mode of f_{τ_t}(x) (or, equivalently, the global minimum of U(x)). In theory, convergence of SA to the global minimum can be ensured with probability 1 if a logarithmic cooling schedule O(1/log(t)) is adopted (Geman and Geman, 1984; Haario and Saksman, 1991); however, this rate is too slow to be implemented in practice because it requires an extremely long CPU time. In practice, linear or geometric cooling schedules are used, but they do not guarantee convergence to the global minimum, and hence the algorithm tends to become trapped in local minima in complex scenarios.
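The SA scheme just described can be sketched in a few lines of Python. This is a minimal illustration only: the function name, the Gaussian random-walk proposal, and the tuning defaults are our own choices, not prescriptions from the text.

```python
import math
import random

def simulated_annealing(U, x0, temperatures, n_inner, step=0.5, rng=None):
    """Minimise U by Metropolis sampling from f_t(x) ∝ exp(-U(x)/tau_t)
    along a decreasing temperature ladder, tracking the best value seen."""
    rng = rng or random.Random()
    x, best_x, best_u = x0, x0, U(x0)
    for tau in temperatures:
        for _ in range(n_inner):
            # Symmetric random-walk proposal Q(.|x).
            xp = x + rng.gauss(0.0, step)
            # Metropolis acceptance test at the current temperature tau.
            if math.log(rng.random() + 1e-300) < -(U(xp) - U(x)) / tau:
                x = xp
            if U(x) < best_u:
                best_x, best_u = x, U(x)
    return best_x, best_u
```

With a geometric ladder, as below, this sketch does not carry the logarithmic-schedule convergence guarantee discussed above; it merely illustrates the mechanics.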
Algorithm 1 Simulated annealing algorithm, used to detect the minimum of a cost function U(x), x ∈ X.
Requires: seed x_0 ∈ X; temperature ladder {τ_t}; densities f_{τ_t}(x) ∝ exp(−U(x)/τ_t).
Iterate: for t = 1, ..., T, repeat for n_t iterations simulating from f_{τ_t}(·) by using a Metropolis-Hastings algorithm:
1. Propose x' ~ Q(d·|x), where Q(d·|·) is a proposal distribution that can be sampled directly.
2. Accept x' with probability min{1, [f_{τ_t}(x') Q(dx|x')] / [f_{τ_t}(x) Q(dx'|x)]}; otherwise retain x.

The stochastic approximation annealing (SAA) algorithm (Liang et al., 2014) is a stochastic optimisation algorithm
that builds upon the SA and SAMC¹ ideas. It involves simulating a time-inhomogeneous Markov chain via MCMC transitions targeting a sequence of modified Boltzmann distributions whose densities are adaptively adjusted via a stochastic approximation mechanism (inherited from SAMC). Each distribution of the sequence is biased according to a partitioning scheme (inherited from SAMC) and parametrised by a temperature ladder (inherited from SA). SAA aims at gradually forcing sampling toward the local minima of each subregion of the partition by lowering the temperature with the iterations, while it ensures that each subregion is visited by the chain according to a predetermined frequency. This strategy shrinks the sampling space in a soft manner and enables SAA to escape from local traps. The global minimum is guaranteed to be reached as the temperature tends to 0 if the temperature ladder uses a square-root cooling schedule O(1/√t) (Liang et al., 2014). We emphasise that, compared to SA, SAA ensures convergence to the global minimum under a much faster (square-root) cooling schedule. In spite of these appealing features, the performance of SAA crucially depends on the efficiency of the self-adjusting mechanism and the exploration of the sampling space involved.
In scenarios where the cost function is rugged or high-dimensional, the exploration of the sampling space can be slow because it is performed by a single Markov chain. Moreover, the information available to support the self-adjusting process is limited, which makes the adjustment of the target density quite unstable and too slow to converge. When the target distribution is poorly adjusted, the convergence of the whole algorithm to the global minimum degrades severely, and the chain may be trapped in local minima. This problematic behaviour can severely downgrade the overall performance of SAA, or even cause local trapping, in complex optimisation problems.
¹ SAMC (Liang et al., 2007, 2010; Wu and Liang, 2011; Bornn et al., 2013; Song et al., 2014) is an adaptive MCMC sampler that aims at addressing the local-mode trapping problem that standard MCMC samplers encounter. It is a generalisation of the Wang-Landau algorithm (Wang and Landau, 2001) equipped with a stochastic approximation scheme (Robbins and Monro, 1951) that adjusts the target distribution. It involves generating a time-inhomogeneous Markov chain that targets a biased distribution, adjusted as the iterations evolve, instead of the distribution of interest itself. The biased distribution is parametrised by a partition scheme and designed such that the generated chain visits each subregion of the partition with a predetermined frequency as the iterations evolve. For an overview see (Liang, 2014).
Essentially, PISAA works on a population of SAA chains that interact with each other in a manner that eliminates the aforementioned problematic behaviour of SAA and accelerates the overall convergence. This allows the proposed algorithm to demonstrate great performance and to address challenging optimisation problems with high-dimensional and very rugged cost functions. PISAA can use advanced MCMC transitions that incorporate crossover operations. These operations allow the information distributed across the chains of the population to be used in guiding further simulations, and therefore lead to a more efficient exploration of the sampling space. Furthermore, PISAA is equipped with a more accurate and stable self-adjusting mechanism for the target density, which uses information gained from the whole population, and therefore accelerates the overall convergence of the algorithm to the global minimum. The use of multiple chains allows PISAA to initialise from various locations and search for the global minimum in different regions of the sampling space simultaneously. PISAA can be implemented in parallel, if a parallel computing environment is available, and hence the computational overhead due to the generation of multiple chains can be reduced dramatically. It is worth emphasising that PISAA is not just an implementation of SAA running in parallel; its key feature is the way the parallel chains interact in order to overcome the aforesaid problematic behaviour and improve performance.
Our numerical examples suggest that the performance of PISAA improves with the size of the population. Moreover, in problems where the cost function is rugged or high-dimensional, PISAA significantly outperforms its competitors SA, ASAMC, and SAA, as well as their population analogues VFSA and AESAMC, as it was able to discover the global minimum much more quickly.
The layout of the article is as follows. In Section 2, we give a brief review of SAA and discuss problems concerning the efficiency of the algorithm; in Section 3, we present the proposed algorithm PISAA; in Section 4, we examine the performance of the proposed algorithm and compare it with those of other stochastic optimisation algorithms (such as SA, ASAMC, AESAMC, and SAA) on challenging optimisation problems; and in Section 5, we conclude.

Stochastic approximation annealing: A review
The stochastic approximation annealing (SAA) algorithm (Liang et al., 2014) casts the optimisation problem in a combined framework of SAMC and SA, in the sense that the invariant distribution is self-adjusted and parametrised by a sampling-space partition and a temperature ladder.
Let E " tE j ; j " 1, ..., mu be a partition of the sampling space X with subregions E 1 " px P X : ´8 ă U pxq ď u 1 q, ..., E j " px P X : u j´1 ă U pxq ď u j q, ..., E m " px P X : u m´1 ă U pxq ă 8q, and grid tu j ; u j P R, j " 1 : m ´1u, for m ą 1. SAA aims at drawing samples from each subregion with a pre-specified frequency.Let π :" pπ j ; j " 1, ..., mq, such that π j " Prpx P E j q, π j ą 0 and ř m j"1 π j " 1, denote the vector of desired sampling frequencies of the m subregions tE j u.We refer to tπ j u as the desired probability.How to choose the partition scheme E for the sampling space or the desired probability tπ j u are problem dependent.SAA seeks to draw samples from the modified Boltzmann distribution with density τ ˚U pxqq1px P E j q; (2.1) at a low temperature value τ ˚, where w ˚:" pw pjq ˚; j " 1 : mq, w pjq ˚" ş Ej expp´1 τ˚U pxqqdx ă 8 are called bias weights, and θ pjq ˚is such that exppθ pjq ˚q9w pjq ˚{π j , for j " 1, ..., m.The rational behind SAA is that, if tθ ˚u were known, sampling from (2.1) could lead to a random walk in the space of subregions (by regarding each subregion as a point) with each subregion being sampled with frequency proportional to tπ j u.Ideally, this can ensure that the lowest energy subregion can be reached by SAA in a long enough run and thus samples can be drawn from the neighbourhood of the global minimum when τ ˚is close to 0.
Since {w*^(j)} are generally unknown, SAA is equipped with an adaptive MCMC scheme that combines the SAMC and SA algorithms in order to simultaneously approximate these values and perform sampling. Let {γ_t; t = 1, ...} denote the gain factor, in terms of the SAMC algorithm, a deterministic, positive, non-increasing sequence such as γ_t = t_0/t^β with β ∈ (0.5, 1]. Let {τ_t} denote a temperature ladder, in terms of the SA algorithm, a deterministic, positive, non-increasing sequence such as τ_t = t_1/√t + τ*, with t_1 > 0 and τ* > 0 very small. We consider a sequence θ_t := (θ_t^(j); j = 1 : m) as a working estimator of {θ*}, where θ_t ∈ Θ and Θ ⊆ R^m is a compact set, e.g. Θ = [10^{−10}, 10^{10}]^m. A truncation mechanism is also considered in order to ensure that {θ_t} remains in the compact set Θ. We define {M_c; c = 1, ...} as a positive, increasing sequence of truncation bounds for {θ_t}, and {c_t} as the total number of truncations until iteration t.
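The two control sequences can be written directly from their definitions; the default constants below are illustrative only.

```python
import math

def gain_factor(t, t0=1.0, beta=0.55):
    """Gain factor gamma_t = t0 / t^beta, beta in (0.5, 1]:
    a positive, non-increasing sequence."""
    return t0 / t ** beta

def temperature(t, t1=1.0, tau_star=0.01):
    """Square-root cooling schedule tau_t = t1 / sqrt(t) + tau_star,
    so tau_t decreases to tau_star as t grows."""
    return t1 / math.sqrt(t) + tau_star
```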
The SAA algorithm proceeds as a recursion consisting of three steps at iteration t: the sampling update, where a sample x_t is simulated from a Markov chain transition probability P_{θ_{t−1},τ_t}(x_t, d·; E) (e.g. a Metropolis-Hastings kernel) with invariant distribution f_{θ_{t−1},τ_t}(d·; E); the weight update, where the unknown bias weights of the target density are approximated through a self-adjusting mechanism; and the truncation step, where {θ_t} is ensured to remain in the compact set Θ. Given the notation above, SAA is presented as pseudo-code in Algorithm 2.
Note that additive transformations of {θ_t} leave f_{θ_{t−1},τ_t}(·; E) invariant. Therefore, it is possible to apply a θ-normalisation step at the end of the run, such that θ_n ← θ_n + z, where Σ_{j=1}^m exp(θ_n^(j) + z) = Z and Z is a pre-specified constant; e.g. z = −log(Σ_{j=1}^m exp(θ_n^(j))) for Z = 1. Appropriate conditions under which SAA is a valid adaptive MCMC algorithm that converges to the global minimum are reported in detail in (Conditions A1-A3 in Liang et al., 2014).
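The θ-normalisation step is a one-line shift; a minimal sketch, assuming Z = 1 by default (the function name is ours):

```python
import math

def normalise_theta(theta, Z=1.0):
    """Shift theta by the constant z so that sum_j exp(theta_j + z) = Z.
    Additive shifts leave the target density invariant, so this only fixes
    the arbitrary normalisation of the bias weights."""
    log_sum = math.log(sum(math.exp(v) for v in theta))
    z = math.log(Z) - log_sum
    return [v + z for v in theta]
```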
SAA presents a number of appealing features when employed to minimise complex systems with rugged cost functions. SAA can work with an affordable square-root cooling schedule O(1/√t) for {τ_t}, which guarantees that the global minimum is reached as the temperature tends to τ* ≈ 0, lim_{t→∞} τ_t = τ*. It is able to locate the minima of each subregion simultaneously (including the global minimum), after a long run, if τ* is close to 0 (Corollary 3.1 in Liang et al., 2014). It is worth mentioning that the square-root rate is much faster than the logarithmic rate that guarantees convergence in the SA algorithm. SAA gradually forces sampling toward the local minima of each subregion of the partition by lowering the temperature with the iterations, while it ensures that each subregion is visited by the chain according to the predetermined frequency {π_j}; this reduces the risk of getting trapped in local minima.
The superiority of SAA rests on its self-adjusting mechanism, which operates based on the past samples in order to estimate the unknown {θ*}. This remarkable mechanism, which distinguishes SAA from SA, proceeds as follows. Given that the current state of the Markov chain is in subregion E_j and a proposal has been made to jump to subregion E_{j'}: if the proposal is rejected during the sampling update, the working value θ_t^(j') will be adjusted during the weight update so that such a move becomes easier to accept at the next iteration; otherwise, θ_t^(j') will be adjusted so that such a move becomes harder to accept at the next iteration. Essentially, the mechanism penalises over-visited subregions and rewards under-visited subregions, and hence makes it easier for the system to escape from local traps. This striking mechanism makes the algorithm appealing for optimisation problems with rugged cost functions.
Although SAA can be quite effective, its success depends crucially on whether the unknown bias weights {θ_t} can be estimated accurately enough through the adjustment process, and whether the Markov chain generated through the sampling step can explore the sampling space adequately. In complex problems where the ruggedness or the dimensionality of the cost function is high, the convergence of {θ_t} is usually slow, an issue that significantly downgrades the overall performance of SAA. The reason is that, at each iteration, the self-adjusting process relies on the limited information obtained from a single draw at the sampling update. This problematic behaviour can severely downgrade the ability of SAA to discover the global minimum in challenging optimisation problems.
Because SAA presents appealing properties, it is of great importance to design an improved algorithm that inherits these desired features while eliminating the problematic behaviour described above.

Parallel and interacting stochastic approximation annealing
The parallel and interacting stochastic approximation annealing (PISAA) algorithm builds on the main principles of SAA (Liang et al., 2014) and the ideas of population MC (Song et al., 2014; Bornn et al., 2013). It works on a population of parallel SAA chains that interact with each other appropriately in order to facilitate the search for the global minimum by improving the self-adjusting mechanism and the exploration of the sampling space. In what follows, we use the notation introduced in Section 2.
3.1 The procedure

PISAA works with a population of samples at each iteration. At iteration t, let x_t^(1:κ) := (x_t^(i); i = 1 : κ) denote the population of samples (abbr. population), which is defined on the population sample space X^κ := X × ... × X. We refer to x_t^(i) as a population individual and assume that x_t^(i) ∈ X, for i = 1, ..., κ, where X ⊂ R^d is called the marginal sample space. The total number of population individuals κ ≥ 1 is called the population size.
We assume that the whole population shares the same common partition scheme E = {E_j; j = 1 : m}, with subregions {E_j} defined according to a grid {u_j; u_j ∈ R, j = 1 : m−1}, as in Section 2. For each individual, PISAA aims at drawing samples from each subregion {E_j} with a desired probability π := (π_j; j = 1, ..., m) defined as in Section 2. Thus, under these specifications, we define a population modified Boltzmann distribution with density

f^(κ)_{θ*,τ*}(x^(1:κ); E) = ∏_{i=1}^κ f_{θ*,τ*}(x^(i); E);   (3.1)

where {w*^(j)} and {θ*^(j)} are defined as in Section 2. Note that the individuals x^(i) of the population x^(1:κ) are independent and identically distributed (i.i.d.), such that each individual x^(i) has marginal distribution f_{θ*,τ*}(x^(i); E) = ∫_{X^{κ−1}} f^(κ)_{θ*,τ*}(x^(1:κ); E) d(x^(1:i−1), x^(i+1:κ)) — the SAA target distribution. Moreover, the total number of the unknown weights {θ*^(j)} is invariant to the population size. The reason why we consider the individuals to be i.i.d. (sharing common E, {π_j}, {θ*^(j)}) will become clearer later in the section. PISAA aims at simulating from the distribution f^(κ)_{θ*,τ*}(d·; E) at a low temperature τ* > 0. The reason is similar to that of SAA: if {θ*^(j)} were known, sampling from (3.1) would lead to a random walk in the space of subregions, with each subregion being sampled with frequency proportional to {π_j}, for each individual.
Ideally, this ensures that the lowest-energy subregion can be reached, and thus samples can be drawn from the neighbourhood of the global minimum when τ* is close to 0. Because {θ*^(j)} are unknown, PISAA employs a population SAMC (Song et al., 2014; Bornn et al., 2013) embedded with SA in order to simultaneously approximate their values and sample the population. Therefore, we consider a sequence of population modified Boltzmann distributions {f^(κ)_{θ_{t−1},τ_t}(d·; E)} with densities

f^(κ)_{θ_{t−1},τ_t}(x^(1:κ); E) = ∏_{i=1}^κ f_{θ_{t−1},τ_t}(x^(i); E);   (3.2)

where the temperature sequence {τ_t}, gain factor {γ_t}, and working estimates {θ_t} are defined as in Section 2.
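Because the population density factorises over the i.i.d. individuals, its log is simply a sum of the marginal SAA log-densities. A small sketch of this product structure (the helper names are ours):

```python
import bisect

def log_f_marginal(x, U, theta, grid, tau):
    """Unnormalised log-density of one SAA individual:
    -theta^{(J(x))} - U(x)/tau, with J(x) the subregion index of x."""
    j = bisect.bisect_left(grid, U(x))
    return -theta[j] - U(x) / tau

def log_f_population(pop, U, theta, grid, tau):
    """Population density (3.2) is the product of i.i.d. marginals,
    so its log is the sum of the individual log-densities."""
    return sum(log_f_marginal(x, U, theta, grid, tau) for x in pop)
```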
PISAA is a recursive procedure that iterates three steps: the sampling update, the weight update, and the truncation step. Although the structure of PISAA is similar to that of SAA, the sampling update and weight update are different and, in fact, significantly more efficient.
The sampling update, at iteration t, involves simulating a population of κ chains from a Markov transition probability P^(κ)_{θ_{t−1},τ_t}(·, d·; E) that admits f^(κ)_{θ_{t−1},τ_t}(d·; E) as its invariant distribution. The Markov transition probabilities {P^(κ)_{θ_{t−1},τ_t}(·, d·; E)} can be designed as a mixture of different MCMC kernels. Because it uses a population of chains, PISAA allows the use of advanced updates in the design of these MCMC kernels, which facilitate the exploration of the sampling space and the search for the global minimum. Two types of such operations are the mutation and the crossover operations.

• Mutation operations update the population individual-by-individual through Metropolis-Hastings-within-Gibbs updates (Müller, 1991; Robert and Casella, 2004), by viewing the population as a long vector. Because the population individuals in (3.2) are independent and identically distributed, in practice the whole population can be updated simultaneously (in parallel) by using the same operation with the same bias weights for each individual. This eliminates the computational overhead due to the generation of multiple chains. Parallel chains allow breaking the sampling into parallel simulations, possibly initialised from different locations, which allows searching for the global minimum in different subregions of the sampling space simultaneously. Moreover, this avoids the need to move a single chain across a potentially large and highly multimodal sampling space. Therefore, it facilitates the search for the global minimum and the exploration of both the sample space and the partition space, while it discourages local trapping. Mutation operations include the random walk Metropolis (Metropolis et al., 1953), hit-and-run (Smith, 1984; Chen and Schmeiser, 1996), k-point (Liang, 2011; Liang and Wong, 2000, 2001), and Gibbs (Müller, 1991; Geman and Geman, 1984) updates.
• Crossover operations, originating in genetic algorithms (Holland, 1975), update the population through a Metropolis-Hastings algorithm that operates on the population space and constructs proposals by using information from different population chains. Essentially, the information distributed across the population is used to guide further simulations. This allows information to be exchanged among different chains of the population in order to improve mixing. As a result, crossover operations can facilitate the exploration of the sample space. Crossover operations include the k-point (Liang, 2011; Liang and Wong, 2000, 2001), snooker (Liang, 2011; Liang and Wong, 2000, 2001; Gilks et al., 1994), and linear (Liang, 2011; Liang and Wong, 2000, 2001; Gilks et al., 1994) crossover operations.
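The two operation types above can be sketched as a random-walk mutation sweep over the individuals and a 1-point crossover between two individuals, each accepted by a Metropolis test under the product target. This is a hedged illustration only; the paper's actual kernels (hit-and-run, snooker, linear crossover, etc.) are more elaborate, and all names here are ours.

```python
import bisect
import math
import random

def log_f(x, U, theta, grid, tau):
    """Unnormalised log-density of one individual: -theta^{(J(x))} - U(x)/tau."""
    return -theta[bisect.bisect_left(grid, U(x))] - U(x) / tau

def mutation_sweep(pop, U, theta, grid, tau, step=0.5, rng=None):
    """Random-walk Metropolis update of every individual (states are tuples);
    the individuals are i.i.d. under (3.2), so this loop could run in parallel."""
    rng = rng or random.Random()
    out = []
    for x in pop:
        xp = tuple(v + rng.gauss(0.0, step) for v in x)
        log_a = log_f(xp, U, theta, grid, tau) - log_f(x, U, theta, grid, tau)
        out.append(xp if math.log(rng.random() + 1e-300) < log_a else x)
    return out

def kpoint_crossover(pop, U, theta, grid, tau, rng=None):
    """1-point crossover between two random individuals, accepted jointly under
    the product target; the proposal is symmetric, so the acceptance ratio is
    just the ratio of target densities."""
    rng = rng or random.Random()
    i, k = rng.sample(range(len(pop)), 2)
    x, y = pop[i], pop[k]
    c = rng.randrange(1, len(x))                  # crossover point
    xp, yp = x[:c] + y[c:], y[:c] + x[c:]         # offspring proposals
    log_a = (log_f(xp, U, theta, grid, tau) + log_f(yp, U, theta, grid, tau)
             - log_f(x, U, theta, grid, tau) - log_f(y, U, theta, grid, tau))
    if math.log(rng.random() + 1e-300) < log_a:
        pop = list(pop)
        pop[i], pop[k] = xp, yp
    return pop
```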
The weight update aims at estimating {θ*^(j)} by using a mean-field approximation at each iteration, with the step size controlled by the gain factor. It is performed by using the whole population of chains. At iteration t, the update of {θ^(j)} is performed as

θ'^(j) = θ_{t−1}^(j) + γ_t ( κ^{−1} Σ_{i=1}^κ 1(x_t^(i) ∈ E_j) − π_j ),   for j = 1, ..., m;

by using the whole population {x_t^(1:κ)}. Thirdly, in the truncation step, if ||θ'||_2 remains within the truncation bound M_{c_t}, we set θ_t = θ'; otherwise we reset θ_t = θ_0 and set c_t = c_{t−1} + 1. At the end of the run, t = n, it is possible to apply a θ-normalisation step (see Section 2); an alternative θ-normalisation step can be θ_n^(j) ← θ_n^(j) + z, where z = −log(Σ_{j=1}^m π_j exp(θ_n^(j))). PISAA is summarised as pseudo-code in Algorithm 3. A more rigorous analysis of the convergence and stability of PISAA is given in Appendix A and summarised in Section 3.2.
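The weight update and truncation step admit a direct sketch. The update rule implemented below (empirical visit frequency minus desired probability, scaled by γ_t) is the standard pop-SAMC form; treat the exact rule, and the function names, as our assumptions.

```python
import math

def weight_update(theta, regions, pi, gamma):
    """Population weight update: theta'_j = theta_j + gamma *
    (empirical visit frequency of E_j over the kappa individuals - pi_j).
    `regions` holds the subregion index of each individual at this iteration."""
    kappa, m = len(regions), len(theta)
    freq = [sum(1 for j in regions if j == jj) / kappa for jj in range(m)]
    return [theta[jj] + gamma * (freq[jj] - pi[jj]) for jj in range(m)]

def truncation_step(theta_prop, theta0, bound, c):
    """Truncation step: keep theta' while its norm stays within the current
    bound M_c; otherwise restart from theta0 and increment the counter."""
    norm = math.sqrt(sum(v * v for v in theta_prop))
    return (theta_prop, c) if norm <= bound else (list(theta0), c + 1)
```

Note that when every subregion is visited with its desired frequency, the update leaves θ unchanged, which is the fixed-point property the self-adjusting mechanism relies on.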

3.2 Theoretical analysis: a synopsis
Regarding convergence, PISAA inherits a number of desirable theoretical results from SAA (Liang et al., 2014) and pop-SAMC (Song et al., 2014). A brief theoretical analysis related to the convergence of PISAA is included in Appendix A, where we show that the theoretical results of Song et al. (2014) for pop-SAMC hold in the PISAA framework as well, and we present theoretical results in Liang et al. (2014) for SAA that also hold for PISAA. Theorems A.1, A.2, A.4, and A.5, as well as the related conditions on PISAA, are included in Appendix A. We recall the temperature ladder τ_t = t_1/√t + τ*, t_1 > 0, and the gain function γ_t = t_0/t^β, t_0 > 0, β ∈ (0.5, 1), and consider that X_t^(1:κ) := (X_t^(i); i = 1, ..., κ) denotes a draw from PISAA at the t-th iteration.
PISAA achieves, for any individual, the following convergence result: for any ε > 0, as t → ∞, the draws X_t^(i) concentrate (in probability) on the set {x ∈ X : U(x) ≤ u_{J(x)} + ε}, where J(x) = j if x ∈ E_j, and u_j = min_{x∈E_j} U(x), for j = 1, ..., m. Namely, as the number of iterations t becomes large, PISAA is able to locate the minima of each subregion in a single run if τ* is small. This comes as a consequence of Liang et al. (2014, Corollary 3.1) and Theorems A.1 and A.2 in Appendix A.
Theorem A.1 in Appendix A (a restatement of Theorems 3.1 and 3.2 of Liang et al. (2014)) indicates that the weights {θ_t} remain in a compact subset of Θ, and hence θ* = (θ*^(j); j = 1, ..., m) can be expressed in the form θ*^(j) = c + log(∫_{E_j} exp(−U(x^(i))/τ*) dx^(i)) − log(π_j), for j = 1, ..., m, and any i = 1, ..., κ, where c ∈ R is an arbitrary constant. Namely, as t → ∞, f_{θ_t,τ_t}(·; E) converges to the target f_{θ*,τ*}(·; E).

It is not trivial to show that the results of (Song et al., 2014) for pop-SAMC hold in the PISAA framework as well. The reason is that, unlike in pop-SAMC, in the PISAA framework the target distribution is parametrised by an additional control parameter, the temperature ladder {τ_t}, and hence the density of the target distribution changes at each iteration; i.e. f_{θ_t,τ_t}(·|E) ≠ f_{θ_{t'},τ_{t'}}(·|E) if t ≠ t' in the PISAA framework.
In Appendix A, Lemma A.3 considers the decomposition of the noise in the PISAA framework and allows us to extend the main theoretical results of (Song et al., 2014) to the PISAA framework, as stated in Theorems A.4 and A.5 in Appendix A. Theorem A.4 implies that the weights {θ_t} generated by PISAA are asymptotically Gaussian distributed, and constitutes an extension of (Theorem 2, Song et al., 2014) to the PISAA framework. Theorem A.5 considers the relative efficiency of the bias weight estimate {θ^p_t} generated by the self-adjusting mechanism of the multiple-chain PISAA (with population size κ) at iteration t, against the estimate {θ^s_{κt}} generated by the self-adjusting mechanism of the single-chain SAA at iteration κ·t. Theorem A.5 implies that (θ^p_t − θ*)/√γ_t and (θ^s_{κt} − θ*)/√(κγ_t) follow the same distribution asymptotically, with convergence-rate ratio κ^{β−1}, where β ∈ (0.5, 1], and hence it is the extension of (Theorem 4, Song et al., 2014) to the PISAA framework.
In other words, when β < 1, the multiple-chain PISAA estimator of the bias weights is asymptotically more efficient than that of the single-chain SAA, while when β = 1 the two estimators present similar efficiency. In practice, the PISAA estimator is expected to outperform the single-chain SAA estimator even when β = 1 because of the so-called population effect: the use of multiple chains to explore the sampling space and approximate the unknown {θ*}. Theorem A.5 implies rigorously that the adjustment process in PISAA is more stable than that in SAA.

Practical implementation and remarks
Liang et al. (2014) discussed several practical issues in the implementation of SAA (including the algorithmic settings {π_j}, {γ_t}, {τ_t}, E, {M_c}, n) that are still applicable to PISAA. Here, we adopt these algorithmic settings, i.e.: π_j ∝ exp(−ζ(j−1)) with ζ ≥ 0; γ_t = (n^(γ)/max(t, n^(γ)))^β with β ∈ (0.5, 1]; τ_t = τ_h √(n^(τ)/max(t, n^(τ))) + τ*, where τ* > 0, τ_h > 0, and n^(τ) > 0; and M_c = 10^10 M_{c−1} with M_0 = 10^100. We briefly discuss additional practical details of PISAA:

• The population seed x_0^(1:κ) controls the initialisation of the population. It is preferable, but not necessary, for the population of chains to start from various locations, possibly around different local minima. This can benefit the exploration of the space and the search for the global minimum. It can be achieved, for example, by sampling from a flat distribution, e.g. f_{τ_0}(x) ∝ exp(−U(x)/τ_0) with τ_0 > 0 large enough, via a random walk Metropolis algorithm.
• The MCMC operations must result in reasonable expected acceptance probabilities, because these affect the sampling update. It is possible to calibrate the scale parameters of the proposals adaptively (on the fly) by using an adaptation scheme (Andrieu and Thoms, 2008) during the first few iterations.
• The rates of the operations in the MCMC sweep at each iteration are problem dependent. One may favour specific operations by increasing the corresponding rates if they are believed to be more effective, or cheaper to run, for the particular application.
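The initialisation recommendation in the first remark above can be sketched as follows; τ_0, the warm-up length, the proposal scale, and the starting box are illustrative tuning choices of ours.

```python
import math
import random

def init_population(U, kappa, tau0=10.0, n_warm=200, step=1.0, rng=None):
    """Seed kappa chains from the flattened density f ∝ exp(-U(x)/tau0),
    tau0 large, via short independent random-walk Metropolis runs started
    from dispersed points (here, uniform draws from an assumed search box)."""
    rng = rng or random.Random()
    pop = []
    for _ in range(kappa):
        x = rng.uniform(-10.0, 10.0)       # dispersed starting point
        for _ in range(n_warm):
            xp = x + rng.gauss(0.0, step)
            # Metropolis test at the high temperature tau0.
            if math.log(rng.random() + 1e-300) < -(U(xp) - U(x)) / tau0:
                x = xp
        pop.append(x)
    return pop
```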
PISAA can be modified to deal with empty subregions, similarly to SAA. Let S_t denote the set of non-empty subregions up to iteration t, θ_t^{S_t} denote the sub-vector of θ_t corresponding to the elements of S_t, and Θ^{S_t} denote the sub-space of Θ corresponding to the elements of S_t. Further, let y^(1:κ) denote the proposed population value generated during the sampling update, and J(x) = j if x ∈ E_j. Then Algorithm 3 can be modified as follows: • (Sampling update): Simulate x_t^(1:κ) ~ P^(κ)_{θ_{t−1},τ_t}(x_{t−1}^(1:κ), d·; E) (as in Algorithm 3), and set S_t ← S_{t−1} ∪ {J(y^(i)); i = 1, ..., κ}.
This modification ensures that {θ_t} remains in a compact set. Note that the desired sampling distribution actually becomes {π_j + π_e; j ∈ S_∞}, where π_e = Σ_{j∉S_∞} π_j / |S_∞| and S_∞ is the limiting set of S_t.
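The bookkeeping for empty subregions amounts to maintaining the visited set S_t and redistributing the unassigned mass; a small sketch (the function names are ours):

```python
def update_visited(S, proposal_regions):
    """Track the set S_t of subregions visited so far:
    S_t = S_{t-1} ∪ {J(y^(i)); i = 1, ..., kappa}."""
    return S | set(proposal_regions)

def effective_pi(pi, S):
    """Redistribute the mass of never-visited subregions uniformly over the
    visited ones: pi_j + pi_e for j in S, pi_e = sum_{j not in S} pi_j / |S|."""
    pi_e = sum(p for j, p in enumerate(pi) if j not in S) / len(S)
    return {j: pi[j] + pi_e for j in S}
```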
For population size κ = 1, PISAA is identical to the single-chain SAA.
PISAA can be used, in the same spirit as the tempered transitions (Neal, 1996), for sampling from a multi-modal distribution f(d·). One can run PISAA with U(x) := −log(f(x)), τ_t = τ_h √(n^(τ)/max(t, n^(τ))) + τ*, τ_h > 1, τ* = 1, and collect the sample x_n^(1:κ). Then, inference can be performed by importance sampling methods, due to Theorems A.1 and A.2 in Appendix A.
As a performance measure, we consider the average best function value discovered by the algorithm.
We perform 48 independent realisations of each simulation and average the values of the performance measures, in order to eliminate nuisance variation in the output of the algorithms (caused by their stochastic nature or random seeds). To monitor the convergence of PISAA and the stability of its self-adjusting mechanism, we consider the MSE of the bias weights, as in (Song et al., 2014).
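Since the bias weights are identified only up to an additive constant (Section 2), a sensible MSE monitor centres both vectors before comparing them. Below is a sketch of such a monitor; this is our own implementation, not necessarily the exact form used in Song et al. (2014).

```python
def bias_weight_mse(theta_hat, theta_star):
    """MSE between estimated and reference bias weights after removing the
    arbitrary additive constant (both vectors are mean-centred first)."""
    def centre(th):
        mu = sum(th) / len(th)
        return [v - mu for v in th]
    a, b = centre(theta_hat), centre(theta_star)
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
```

By construction, the monitor reports zero for two weight vectors that differ only by a constant shift, reflecting the invariance of the target density.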

Gaussian mixture model
We consider the Gaussian mixture with density

f_1(x) = Σ_{i=1}^{20} w_i N(x; µ_i, σ² I_2);   (4.1)

where x ∈ R², X = [−10^10, 10^10]², σ² = 0.001, weights {w_i = 1/20; i = 1 : 20}, and means {µ_i} given in (Table 1 in Liang and Wong, 2001). Sampling from (4.1) is challenging because this distribution is multi-modal and has several isolated modes. Here, our purpose is to check the validity of PISAA rather than optimisation.
We consider default algorithmic settings for PISAA: (i) energy function U_1(x) = −log(f_1(x)); (ii) uniformly spaced grid {u_j} with m = 19, u_1 = 0, and u_19 = 9.0; (iii) gain factor {γ_t} with n^(γ) = 100 and β = 0.55; (iv) temperature sequence {τ_t} with n^(τ) = 1, τ_h = 5, and τ* = 1 − τ_h √(1/n); and (v) an MCMC transition probability that uses mutation operations (Metropolis, hit-and-run, k-point) and crossover operations (k-point, snooker, linear), with equal operation rates and proposal scales calibrated so that the expected acceptance probabilities are around 0.234. At the end of the simulation, at iteration n = 10^6, the temperature is τ_n = 1; hence one may see this example as tempered-transition sampling from the multi-modal distribution f_1(d·) via PISAA.
We run PISAA with different combinations of population size $\kappa \in \{1, \ldots, 30\}$ and gain factor power $\beta$. Figure 4.1b presents the MSEs produced by PISAA at different iteration steps, and for different population sizes. We observe that PISAA with larger population sizes produces smaller MSEs throughout the whole simulation time. Moreover, the MSE decays as the iterations evolve; this behaviour, although not surprising, may be non-trivial due to the heterogeneous nature of the sequence $\{w_t\}$ (that is, $w_t \neq w_{t'}$ for $t \neq t'$). Figure 4.1c presents the progression of the MSEs produced by PISAA for different gain factor powers. We observe that the MSE decreases when the population size increases. Furthermore, we observe that this behaviour is more significant for slower decaying gain factors, namely when the power of the gain factor is smaller and close to 0.5. In serial computing environments, the computational cost can be defined as the number of iterations times the population size. For the computation of the relative efficiency $RE(\kappa; \beta)$, we considered a constant computational cost, and hence PISAA with population size $\kappa$ ran for $\lfloor n/\kappa \rfloor$ iterations, where $n = 10^6$.
In Figure 4.1d, the marks refer to the estimated relative efficiency, the dashed lines have slope $\beta - 1$ and refer to the theoretical behaviour of the relative efficiency (i.e. $\lg(RE(\kappa; \beta)) \approx (\beta - 1) \lg(\kappa)$) from Theorem A.5, while the different colours correspond to different values of $\beta$. We observe that the empirical results are consistent with Theorem A.5, since the marks lie close to their corresponding lines. The efficiency of the estimates of the bias weights produced by the self-adjusting mechanism of PISAA improves as $\kappa$ increases. Thus, increasing the population size improves the stability of PISAA even when a serial computing environment is used and a fixed computational budget is given. We observe that this behaviour is even more significant for slower decaying gain factors.
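The theoretical behaviour above, as the text summarises Theorem A.5, amounts to $RE(\kappa; \beta) \approx \kappa^{\beta - 1}$, i.e. a line of slope $\beta - 1$ on a log-log plot. A one-line illustrative check of that relation:

```python
def relative_efficiency(kappa, beta):
    """Theoretical relative efficiency RE(kappa; beta) = kappa**(beta - 1),
    so that lg RE = (beta - 1) * lg kappa: a line of slope beta - 1 on a
    log-log plot."""
    return kappa ** (beta - 1.0)
```

For instance, with $\beta$ close to 0.5, doubling $\kappa$ shrinks $RE$ by nearly $1/\sqrt{2}$, matching the observation that slower decaying gain factors benefit more from larger populations.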
The results support that PISAA produces the correct estimates for $w_*$ as $\tau_t \to \tau_*$, that the MSE of those estimates reduces as $\kappa$ increases, and that the efficiency of the self-adjusting mechanism improves as $\kappa$ increases.
Rastrigin's function has been used by several researchers as a hard benchmark function to test experimental optimisation algorithms (Dieterich and Hartke, 2012; Törn and Žilinskas, 1989; Mühlenbein et al., 1991; Liang, 2011; Liang et al., 2006; Ali et al., 2005). It presents features that can complicate the search for the global minimum: it is non-convex, non-linear, relatively flat, and presents several local minima whose number increases with the dimension; e.g. about 50 local minima for $d = 2$ (Ali et al., 2005). The rotation transformation (4.4) is a well-established technique that transforms originally separable test functions, such as Rastrigin's, into non-separable ones. Non-separability makes the optimisation task even harder by preventing the optimisation of a multidimensional function from being reduced to many separate lower-dimensional optimisation tasks. For instance, in (4.4), all the dimensions in vector $y$ are affected when one dimension in vector $x$ changes in value.
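A sketch of the rotated Rastrigin objective; the orthogonal matrix $Q$ in (4.4) is drawn here from a QR decomposition purely for illustration, and the paper's exact rotation may differ:

```python
import numpy as np

def rastrigin(y, A=10.0):
    """Separable Rastrigin function; global minimum 0 at y = 0."""
    y = np.asarray(y, dtype=float)
    return float(A * y.size + np.sum(y ** 2 - A * np.cos(2.0 * np.pi * y)))

def rotated_rastrigin(x, Q):
    """Non-separable variant via y = Q x (cf. Eq. (4.4)): changing one
    coordinate of x perturbs every coordinate of y."""
    return rastrigin(np.asarray(Q) @ np.asarray(x, dtype=float))

# One way to draw an orthogonal rotation matrix (illustrative only):
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))
```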
One MCMC sweep is considered to be a random scan of mutation operations (Metropolis, hit-and-run, k-point) and crossover operations (k-point, snooker, linear), with equal operation rates, and scale parameters calibrated so that the expected acceptance ratio is around 0.234.
In Figure 4.2a, we present the average progression curves of the best function value (best value) discovered by PISAA for different population sizes $\kappa \in \{1, 4, 5, 14, 30\}$. We observe that, by using larger population sizes, the algorithm discovers smaller best values more quickly, and converges more quickly towards the global minimum. The difference in performance between SAA (i.e. PISAA with $\kappa = 1$) and PISAA using a moderate population size, such as $\kappa = 5$, is significant. Increasing the population size improves the performance of the algorithm significantly at any dimensionality considered, while it is particularly effective in moderate or large dimensionalities. We highlight that the most striking performance improvement is observed in the range of small population sizes. In Figure 4.2c, we observe that the MSE of the bias weights approximated by the self-adjusting mechanism of PISAA becomes smaller when larger population sizes are used. This indicates that increasing the population size makes the self-adjusting mechanism of PISAA more stable.
The performance of PISAA with respect to the grid size, for different desired probabilities, is presented in Figure 4.2d. In particular, we ran PISAA with a large enough population size ($\kappa = 30$) to ensure that all the subregions are visited. We observe that larger grid sizes lead to a better performance for PISAA, given that the population size is large enough. We observe that the choice of the desired probability has a bigger impact for large grid sizes ($m > 50$) than for smaller grid sizes ($m < 50$). However, for any grid size, we observe that a moderately biased desired distribution ($\lambda \approx 0.1, \ldots, 0.9$) is preferable. The performance of PISAA against the population size, for different desired probabilities, is presented in Figure 4.2e. We observe that increasing the population size is more effective for desired probabilities with $\lambda \approx 0.1, \ldots, 0.9$. Hence, although biasing towards low energy subregions is preferable for optimisation problems, over-biasing can slow down the convergence towards the global minimum. Figure 4.2f presents the performance of PISAA against the population size, for different grid sizes. The performance improvement of PISAA due to the population size increase becomes more significant when finer grids (larger grid sizes) are used. As mentioned, finer grids improve the exploration of the sampling space; however, they require a more efficient self-adjusting mechanism to counteract the potentially larger variance in the approximation of $\{\theta_t\}$ due to the increased number of subregions. Here, we observed that increasing the population size allows the use of finer grids while mitigating this effect.
We compare PISAA with VFSA using the same operations and temperature ladder as PISAA, and with AESAMC using the settings of (Liang, 2011), against the 30D Rastrigin's function. In Figures 4.3a and 4.3b, we plot the average progression curves of the best values discovered by each algorithm for population sizes $\kappa = 5$ and $14$, respectively. We observe that PISAA converges more quickly to the global minimum than VFSA and AESAMC in both cases. Figure 4.3c presents the performance of the algorithms against the population size. We observe that increasing the population size improves the performance of PISAA, in terms of average best values discovered, significantly faster than the performance of VFSA and AESAMC. It is observed that, as the population size increases, VFSA and AESAMC stop improving after $\kappa = 10$, while PISAA continues to improve even for $\kappa > 10$, albeit at a slower rate. This is because the underlying adjustment process of $\{w_t\}$ keeps on improving, in terms of variance, and converges faster as $\kappa$ increases. Therefore, PISAA significantly outperforms VFSA and AESAMC.

Protein folding
Proteins are essential to living organisms as they carry out a multitude of biological processes, e.g. the production of enzymes, antibodies, etc. In biophysics, understanding the protein folding mechanism is important because the native conformation of a protein strongly determines its biological function. Predicting the native conformation of a protein from its sequence can be treated as an optimisation problem that involves finding the coordinates of the atoms such that the potential energy of the protein is minimised. This is a challenging optimisation problem (Liang, 2004) because (i) the dimensionality of the system is usually high, and (ii) the landscape of the potential energy is rugged and characterised by a multitude of local energy minima separated by high energy barriers.
To understand the relevant mechanics of protein folding, simplified, but still non-trivial, theoretical protein models exist; among them is the off-lattice AB protein model (Stillinger et al., 1993). The off-lattice AB protein model incorporates only two types of monomers, A and B, in place of the 20 that occur naturally, which have hydrophobic and hydrophilic behaviours respectively. The atom sequence $S_i$, $i \in \{2, 3, \ldots\}$, of an $N_i$-mer can be determined by a Fibonacci sequence (Stillinger et al., 1993; Stillinger and Head-Gordon, 1995; Hsu et al., 2003), which is defined recursively as $S_0 = A$, $S_1 = B$, $S_i = S_{i-2} S_{i-1}$, and has length given by the Fibonacci number $N_i = N_{i-2} + N_{i-1}$, $i \ge 2$. The atoms are assumed to be linked consecutively by rigid bonds of unit length to form a linear chain which can bend continuously between any pair of successive links. The chain can reside in the 2- or 3-dimensional physical space, which defines the 2D or 3D off-lattice AB model, correspondingly.
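The Fibonacci construction of the monomer sequence can be sketched as follows (illustrative helper):

```python
def ab_sequence(i):
    """AB sequence of the off-lattice AB model: S0 = 'A', S1 = 'B',
    S_i = S_{i-2} S_{i-1}; len(S_i) is the Fibonacci number N_i."""
    if i == 0:
        return "A"
    prev2, prev1 = "A", "B"  # S_{i-2}, S_{i-1}
    for _ in range(i - 1):
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```

For instance, the 13-, 21-, 34-, and 55-mers used in the experiments correspond to $S_6$, $S_7$, $S_8$, and $S_9$.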
For the 2D AB model (Stillinger et al., 1993; Stillinger and Head-Gordon, 1995; Liang, 2004), the potential energy is defined as in the cited works; we consider the cases of 13-, 21-, and 34-mers. Compared to the standard SAA (i.e. PISAA with $\kappa = 1$), PISAA (with $\kappa > 1$) presents significantly improved performance in the 55-mer 2D and 3D AB models when the same number of iterations is considered. Note that increasing the population size of PISAA does not necessarily mean that the CPU time required for the algorithm to run increases significantly, because PISAA can be implemented in a parallel computational environment, if available. In Figures 4.4c and 4.4f, we observe that when PISAA uses larger population sizes, the bias weights generated by the self-adjusting mechanism of PISAA have smaller MSE, and hence the algorithm tends to present a more stable self-adjusting process.
We compare the performance of PISAA with those of VFSA and AESAMC, against the 55-mer 2D and 3D off-lattice AB models. We run each simulation 48 times independently to eliminate possible variation in the output caused by nuisance factors. Regarding the algorithmic settings: PISAA uses the aforementioned settings, VFSA shares common settings with PISAA, and AESAMC uses an equally spaced partition of $10^4$ subregions, temperature $\tau = 1.0$, and threshold value $\aleph = 10$. VFSA and AESAMC use the same crossover operations.

The posterior model in (4.7) uses a neighbourhood of the eight adjacencies (vertical, horizontal, and diagonal) of each interior pixel. In Eq. (4.7), the first term is associated with the likelihood and encourages the states $x_i$ to be identical to the observed pixels $y_i$, while the second term, associated with the Ising prior model, encourages neighbouring pixels to be equal and hence provides smoothing. In this context, image restoration can be achieved by computing the maximum a posteriori (MAP) estimate of the original image, which can be found by minimising the negative log-posterior density, $U_4(x) := -\log(\pi(x|y))$ (Geman and Geman, 1984).
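A hedged sketch of such a posterior energy $U_4$: a data term penalising disagreement with the observed image $y$, plus an Ising term over the 8-neighbourhood. The constants `a` and `b` and the exact functional form in Eq. (4.7) follow the paper; the ones below are placeholders.

```python
import numpy as np

def U4(x, y, a=1.0, b=0.7):
    """Negative log-posterior sketch for binary image restoration (cf. Eq. (4.7)).

    Data term : counts pixels where the state x disagrees with the image y.
    Prior term: counts disagreeing 8-neighbour pairs (each pair counted once).
    """
    x = np.asarray(x)
    y = np.asarray(y)
    energy = a * float(np.sum(x != y))  # likelihood term
    # 8-neighbourhood via four pair directions: east, south, SE, SW.
    pairs = 0
    for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        r0, r1 = max(dr, 0), x.shape[0] - max(-dr, 0)
        c0, c1 = max(dc, 0), x.shape[1] - max(-dc, 0)
        shifted = x[r0:r1, c0:c1]
        base = x[max(-dr, 0):x.shape[0] - max(dr, 0),
                 max(-dc, 0):x.shape[1] - max(dc, 0)]
        pairs += int(np.sum(shifted != base))
    return energy + b * pairs
```

A perfectly smooth state that matches the data has zero energy; disagreeing neighbours or pixels raise it.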
Computational difficulties arise when algorithms based on standard MCMC samplers with a component-wise structure of single-pixel updates are employed (Higdon, 1998). Such an update design tends either to converge slowly or to get trapped, because the prior term in (4.7) strongly prefers large blocks of pixels. This issue becomes even more serious for large values of $b$, which favour strong dependencies. Against this application, we compare PISAA, VFSA, and PSAA. PSAA refers to parallel SAA, a multiple-chain implementation of SAA that involves running a number of standard SAA procedures with the same algorithmic settings in parallel and completely independently. PISAA and PSAA use the following algorithmic settings: (i) $n = 5 \cdot 10^5$ iterations; (ii) uniformly spaced grid $\{u_j\}$ with $m = 200$, $u_1 = -826315.5$, $u_{100} = -971500.5$; (iii) desired probability with parameter $\lambda = 0.1$; (iv) temperature ladder $\{\tau_t\}$ with $\tau_h = 5$, $n^{(\tau)} = 10^3$, $\tau_* = 10^{-2}$; (v) gain factor $\{\gamma_t\}$ with $n^{(\gamma)} = 10^3$, $\beta = 0.55$. The MCMC kernel of PISAA is designed to be a random scan of a Gibbs update (updating one pixel at a time) and k-point crossover operations (with $k = 2$). VFSA and SAA use only Gibbs updates.
We observe that PISAA discovers smaller best values more quickly when the population size increases (Figures 4.7a and 4.7c). Figure 4.7c shows that PISAA converges more quickly than PSAA as the number of parallel chains involved increases. This implies that it is preferable to run PISAA with a population size $\kappa > 1$ rather than to run $\kappa$ SAA procedures completely independently of each other. Moreover, it shows that the interacting character of PISAA is a necessary ingredient for significantly improving the performance of the algorithm by increasing the population size. By the 'interacting character' of PISAA, we refer to the distinctive way that the crossover operations and the self-adjusting mechanism of PISAA use the distributed information gained from all the population chains in order to operate. Moreover, we observe that PISAA outperforms VFSA (Figures 4.7b and 4.7c). Finally, the MAP estimate of the original image, as computed by PISAA with population size 30, is shown in Figure 4.6b.

Bayesian network learning
The Bayesian network (Ellis and Wong, 2008) is a directed acyclic graph (DAG) whose nodes represent variables in the domain and whose edges correspond to direct probabilistic dependencies between them. It is a powerful knowledge representation and reasoning tool under the conditions of uncertainty that are typical of real-life applications. Mathematically, it can be defined as a pair $B = (G, \rho)$, where $G = (V, E)$ is a DAG representing the structure of the network, $V$ denotes the set of nodes, $E$ denotes the set of edges, and $\rho$ is the vector of the associated conditional probabilities. In the discrete case we consider here, each node $V_i \in V$, $i = 1:d$, takes values in a finite set $\{v_j;\ j = 1:r_i\}$, $r_i \in \mathbb{N} \setminus \{0\}$, and hence $V_i$ is assumed to be a categorical variable. Therefore, there are $q_i = \prod_{V_j \in \mathrm{pa}(V_i)} r_j$ possible values for the joint state of the parents of $V_i$, where $\mathrm{pa}(V_i)$ denotes the set of parents of node $V_i$. In this example, we consider the prior model of Ellis and Wong (2008) and Liang and Zhang (2009), and hence we focus our interest on the marginal posterior probability $\Pr(G|D)$, which involves the factors $\Gamma(a_{i,j,k} + n_{i,j,k})/\Gamma(a_{i,j,k})$ (4.8), where $D = \{V_i;\ i = 1:N\}$ denotes the data set, considered to be IID samples, $n_{i,j,k}$ denotes the number of samples for which $V_i$ is in state $j$ and $\mathrm{pa}(V_i)$ is in state $k$, $a_{i,j,k} = (r_i q_i)^{-1}$ (Ellis and Wong, 2008), and $b \in (0, 1)$ (here, $b = 0.1$ (Liang and Zhang, 2009)). The negative log-posterior distribution function, or energy function, of the Bayesian network is $U_5(G) := -\log(\Pr(G|D))$.
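The Gamma-function ratios in (4.8) are evaluated in log space in practice; a hedged sketch of one node's contribution follows (the full score also carries the structure-prior factor in $b$ and other terms that the excerpt elides):

```python
from math import lgamma

def family_log_score(counts, a):
    """Sum over parent states k and child states j of
    log Gamma(a + n_ijk) - log Gamma(a), with a = a_ijk = 1 / (r_i * q_i)
    and counts[k][j] = n_ijk.

    Working in log space via lgamma avoids overflow for large counts.
    """
    return sum(lgamma(a + n) - lgamma(a) for row in counts for n in row)
```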
Existing methods for learning Bayesian networks include conditional independence tests (Wermuth and Lauritzen, 1982), optimisation (Heckerman et al., 1995), and MCMC simulation (Madigan and Raftery, 1994; Liang and Zhang, 2009) approaches. Often, interest lies in finding the maximum a posteriori (MAP) putative network, which can be done by minimising the negative log-posterior distribution density $U_5(\cdot)$. Deterministic optimisation procedures often stop at locally optimal structures. Standard MCMC-based approaches, although seemingly more attractive (Liang and Zhang, 2009), are still prone to getting trapped in local energy minima indefinitely. This is because the energy landscape of the Bayesian network can be quite rugged, with a multitude of local energy minima separated by high energy barriers, especially when the network size is large. Here, we examine the performance of PISAA against this challenging optimisation problem.
We consider the Single Photon Emission Computed Tomography (SPECT) data set (Cios et al., 1997; Kurgan et al., 2001). We examine the performance of PISAA as a function of the iterations and the population size, and compare it with those of PSAA and VFSA. PISAA uses the algorithmic settings: (i) $n = 2 \cdot 10^8$ iterations; (ii) uniformly spaced grid $\{u_j\}$ with $m = 2001$, $u_1 = 2000$, $u_{2001} = 3999$; (iii) desired probability with parameter $\lambda = 0.05$; (iv) temperature ladder $\{\tau_t\}$ with $\tau_h = 50$, $n^{(\tau)} = 1$, $\tau_* = 10^{-1}$; (v) gain factor $\{\gamma_t\}$ with $n^{(\gamma)} = 10^6$, $\beta = 0.55$. The MCMC kernel is designed to be a random scan of mutation operations only (temporal order, skeletal, and double skeletal, as suggested by Liang and Zhang (2009) and Wallace and Korb (1999)), with equal operation rates. PSAA and VFSA share common settings with PISAA. Each simulation is run 48 times to eliminate output variations caused by nuisance factors. The MAP putative network computed by running PISAA with population size 30 and $2 \cdot 10^8$ iterations is shown in Figure 4.9.

Summary and conclusions
We developed the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that builds upon the ideas of stochastic approximation annealing and population Monte Carlo samplers. PISAA inherits from SAA a remarkable self-adjusting mechanism that operates based on past samples and facilitates the system's escape from local traps. Furthermore, the self-adjusting mechanism of PISAA is more accurate and stable because it uses information from the whole population of chains. Moreover, the sampling mechanism of PISAA is more effective because it allows the use of advanced MCMC transitions, such as the crossover operations. It also breaks sampling into multiple parallel procedures able to search for minima in different regions of the sampling space simultaneously. This allows PISAA to demonstrate a remarkable performance and to address challenging optimisation problems, with high-dimensional and rugged cost functions, that it would be quite difficult for SAA to tackle acceptably. The computational overhead due to the generation of multiple chains can be reduced dramatically if a parallel computing environment is available.
We examined empirically the performance of PISAA against several challenging optimisation problems.
We observed that PISAA significantly outperforms SAA in terms of convergence to the global minimum, as it effectively mitigates the problematic behaviour of SAA. Our results suggested that, as the population size increases, the performance of PISAA improves significantly in terms of discovering the global minimum and adjusting the target density. More precisely, when the population size increases, PISAA discovers the global minimum more quickly, and the adjustment of the target density is more stable. More importantly, we observed that, instead of running several SAA procedures completely independently, it is preferable to run one PISAA procedure with the same number of chains (or, equivalently, population size). In our examples, PISAA significantly outperformed other competitors, such as SA and ASAMC, and their population analogues, such as VFSA and AESAMC. In fact, it was observed that, as the population size increases, the performance of PISAA improves significantly faster than those of VFSA and AESAMC.
Under the framework of PISAA, we showed that the theoretical results of Song et al. (2014) for pop-SAMC, regarding the asymptotic efficiency of the estimates of the unknown bias weights, hold for PISAA as well, and presented theoretical results of Liang et al. (2014) for SAA, regarding the convergence of the algorithm, that also hold for PISAA. The empirical results confirmed that PISAA produces correct estimates for the unknown bias weights $w_*$ as $\tau_t \to \tau_*$, and that the efficiency of these estimates significantly improves as the population size increases. Moreover, the theoretical limiting ratio between the rates of convergence of the estimates generated by PISAA and SAA was also confirmed by our empirical results.
Another important use of PISAA could be that of sampling from multi-modal distributions and then performing inference via importance sampling methods. PISAA can be extended to use an adaptive binning strategy for automatically determining the partition of the sampling space, similar to (Bornn et al., 2013), or a smoothing method to estimate the frequency of visiting each subregion, similar to (Liang, 2009).

(i) The function $h_\tau(\theta)$ is bounded and continuously differentiable with respect to both $\theta$ and $\tau$, and there exists a non-negative, upper-bounded, and continuously differentiable function $v_\tau(\theta)$ such that, for any $\Delta > \delta > 0$,
$$\sup_{\delta \le d((\theta,\tau),\, L) \le \Delta} \nabla^T_\theta v_\tau(\theta)\, h_\tau(\theta) < 0, \qquad (A.2)$$
where $L = \{(\theta, \tau) : h_\tau(\theta) = 0,\ \theta \in \Theta,\ \tau \in T\}$ is the zero set of $h_\tau(\theta)$, and $d(z, S) = \inf_y\{\|z - y\| : y \in S\}$. Further, the set $v(L) = \{v_\tau(\theta) : (\theta, \tau) \in L\}$ is nowhere dense.
(A.3)

$(A_2)$ (Doeblin condition) For any given $\theta \in \Theta$ and $\tau \in T$, the Markov transition kernel $P_{\theta,\tau}$ is irreducible and aperiodic. In addition, there exist an integer $l$, $0 < \delta < 1$, and a probability measure $\nu$ such that, for any compact subset $K \subset \Theta$,
$$\inf_{\theta \in K,\, \tau \in T} P^l_{\theta,\tau}(x, A) \ge \delta\, \nu(A), \quad \forall x \in \mathcal{X},\ \forall A \in \mathcal{B}_\mathcal{X},$$
where $\mathcal{B}_\mathcal{X}$ denotes the Borel set of $\mathcal{X}$; that is, the whole support $\mathcal{X}$ is a small set for each kernel $P_{\theta,\tau}$, $\theta \in K$ and $\tau \in T$.
$(A_4)$ (Conditions on $\{\gamma_t\}$ and $\{\tau_t\}$) (i) The sequence $\{\gamma_t\}$, which is defined to be $\gamma(t)$ as a function of $t$ and is used interchangeably with $\gamma(t)$ in this paper, is positive, non-increasing, and satisfies the following conditions:

2. accept $G'^{(1:\kappa)} = (G^{(1:i-1)}, G'^{(i)}, G^{(i+1:\kappa)})$ with probability $a_{TO} = \min\!\left(1, \frac{f_{\theta,\tau}(G'^{(i)}|E)}{f_{\theta,\tau}(G^{(i)}|E)}\right)$.

• Skeletal change: For $i = 1, \ldots, \kappa$:
1. compute $G'^{(i)}$ by adding or deleting an edge between a pair of randomly selected nodes.
2. accept $G'^{(1:\kappa)} = (G^{(1:i-1)}, G'^{(i)}, G^{(i+1:\kappa)})$ with probability $a_{DS} = \min\!\left(1, \frac{f_{\theta,\tau}(G'^{(i)}|E)}{f_{\theta,\tau}(G^{(i)}|E)}\right)$.

Remark B.1. The scale parameters of the proposals of the operations were tuned during pilot runs using the adaptation scheme $\log(\sigma^2_{MRW}) \leftarrow \log(\sigma^2_{MRW}) + [a_{MRW} - 0.234]$; this ensures that the associated expected acceptance probabilities will be around 0.234. In our applications, the performance of this adaptation scheme was acceptable; however, more sophisticated schemes can be used. For more adaptive Metropolis-Hastings schemes, see (Andrieu and Thoms, 2008).
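The Remark B.1 adaptation is a stochastic-approximation update of the log proposal scale; a minimal sketch follows (the `step` factor is an assumption — the remark writes the bare increment $[a_{MRW} - 0.234]$):

```python
def adapt_log_scale(log_sigma2, acc_rate, target=0.234, step=1.0):
    """One adaptation step:
    log(sigma^2) <- log(sigma^2) + step * (acc_rate - target).

    Raising the scale when acceptance is above target (and lowering it when
    below) drives the expected acceptance probability towards ~0.234.
    """
    return log_sigma2 + step * (acc_rate - target)
```

In practice this would be applied after each pilot batch, using that batch's empirical acceptance rate.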
Here, $w_*$ are the real values of the bias weights, and $\theta^{(\kappa)}_t$ are the estimates of $w_t$ approximated by the self-adjusting mechanism of PISAA with population size $\kappa$. The mutation operations and crossover operations used in the examples are presented in Appendix B as pseudo-code.

$\beta \in \{0.55, 0.65, 0.75, 0.85, 0.95, 1.0\}$. Each of these runs was repeated for 100 realisations in order to compute the estimates, error bars, and mean square error (MSE) of the bias weights. Note that the bias weights are estimated by the self-adjusting mechanism of PISAA using the $\theta$-normalisation step.

Figure 4.1a presents the estimates of the bias weights $\theta^{(\kappa)}_{\lfloor n/\kappa \rfloor}$, for $j = 1, \ldots, 6$ and $n = 10^6$, as produced by the self-adjusting mechanism of PISAA with different population sizes $\kappa \in \{1, 10, 30\}$. We observe that the $\{\theta^{(\kappa,j)}_n\}$ of PISAA have converged to the true values at each of the population sizes considered, and that the associated error bars are narrower for larger population sizes. Figure 4.1b presents the MSEs produced by PISAA at different iteration steps.

Figure 4.1d presents the relative efficiency $RE(\kappa; \beta)$ of the self-adjusting process estimator for the bias weights $w_*$ as a function of the population size $\kappa \in \{2, \ldots, 30\}$, and for different powers of the gain factor $\beta \in \{0.55, 0.65, 0.75, 0.85, 0.95, 1\}$.

Figure 4.1: (Section 4.1) Estimates, MSEs, and relative efficiency of the bias weights $\{w^{(j)}_n\}$ produced by PISAA at different iterations, population sizes, and powers $\beta$ of the gain factor. Panels: estimates and 95% error bars of $\{w^{(\kappa,j)}_n\}$; progression curves of the MSE estimate of $w^{(\kappa,j)}$ for different population sizes ($\beta = 0.55$); MSE against the population size, for different powers $\beta$ of the gain factor (at $n = 10^6$); relative efficiency of the bias-weight estimate as a function of the population size, for different powers of the gain factor.
In Figure 4.2b, we plot the best value against the population size for different dimensionalities of the Rastrigin's function. We observe that PISAA discovers smaller best values as the population size increases, for the same number of iterations.

Parallel and Interacting Stochastic Approximation Annealing algorithms for global optimisation. Georgios Karagiannis, Bledar A. Konomi, Guang Lin, and Faming Liang.

Figure 4.2: (Section 4.2) Performance plots of PISAA. The results reported consider averaged values over 48 independent runs. Panels: average progression curves of the best function values, for different population sizes; average best function values against the population size, for different dimensionalities; progression curves of the MSE estimate of $w_t$, in lg-scale, for different population sizes ($\beta = 0.55$); average best function values as functions of the grid size, for different values $\lambda$ of the desired distribution; average best function values as functions of the population size, for different values $\lambda$ of the desired distribution; average best function values as functions of the population size, for different grid sizes.
Figure 4.3: (Section 4.2) Average best function values generated by PISAA, AESAMC, and VFSA against the population size.

Figure 4.4: (Section 4.3) Performance plots of PISAA. The results reported consider averaged values over 48 independent runs; the 1st and 2nd rows refer to the 2D and 3D AB models, respectively. Panels include progression curves of the MSE estimate of $w_t$ for different population sizes ($\beta = 0.55$).
Average progression curves of the best function values discovered by PISAA, AESAMC, and VFSA with population sizes 3 and 30.

Figure 4.6: (Section 4.4) Gray-scale digital photo-micrograph of the micro-structure of the Ferrite-Pearlite steel, and the MAP estimate of its red-framed fragment.

Average progression curves of the best function values discovered by PISAA, PSAA, and VFSA with population size 20, and average best function values generated by PISAA, PSAA, and VFSA against the population size.

Figure 4.8a presents the average progression curves of the best values discovered by PISAA at different population sizes. We observe that increasing the population size accelerates the convergence of the algorithm towards smaller best values. Figure 4.8b shows the best function values discovered by PISAA, PSAA, and VFSA using 30 chains each. We observe that PISAA tends to discover smaller best values more quickly than PSAA and VFSA. In Figure 4.8c, we present the best values discovered by the algorithms under comparison after $2 \cdot 10^8$ iterations, as functions of the population size. We observe that PISAA has discovered smaller best values than PSAA and VFSA. A reader unfamiliar with Bayesian network modelling might argue that the observed improvement in the performance of PISAA due to the population size increase is not that eye-catching in Figure 4.8c, because of the decisively small slope of the curve. In Bayesian networks (Liang and Zhang,

Average best function values discovered by PISAA, PSAA, and VFSA against the population size.

Figure 4.9: (Section 4.5) MAP estimate $G_{MAP}$ of the putative network, as computed by running PISAA with population size 25 for $2 \cdot 10^8$ iterations ($U_5(G_{MAP}) = 3026.935103$).

Essentially, the function $H_{\tau_t}(\theta_{t-1}, x_t)$ is computed from only one single observation: at iteration $t$, $p_t$ in Algorithm 2 is an $m$-dimensional vector of 0s and 1s (absence and occurrence) indicating to which subregion the sample $x_t$ belongs. Even after a long run, this can cause a large variation in the estimate of $\{\theta_t\}$ and severely slow down the convergence of $\{\theta_t\}$, especially if the number of subregions $m$ is large. Consequently, the adjustment of the target density becomes quite unstable and the self-adjusting mechanism becomes less effective. This can slow down the convergence of SAA, or even cause the chain to be trapped in local minima.
Parallel and interacting stochastic approximation annealing algorithm. Requires: the sequences $\{\tau_t\}$ and $\{\gamma_t\}$, the partition $\{E_j\}$, the desired probabilities $\{\pi_j\}$, the truncation bounds $\{M_c\}$, the population size $\kappa$, and $\theta_0$. Initialise: at $t = 0$, set $x \in \mathcal{X}^\kappa$ and $\theta_0 \in \Theta$ such that $\|\theta_0\|_2 < M_0$, and $c_0 = 0$. Update the working estimate $\theta_t$ according to $\theta' = \theta_{t-1} + \gamma_t H^{(\kappa)}_t$, with components $H^{(\kappa,j)}_t$, $j = 1:m$, based on the proportion of the population in $E_j$, for $j = 1, \ldots, m$. Then set $\theta_t = \theta'$ and $c_t = c_{t-1}$ if $\|\theta'\|_2 \le M_{c_t}$, or set $\theta_t = \theta_0$ and $c_t = c_{t-1} + 1$ otherwise. Intuitively, because all the population chains share the same partition $\{E_j\}$ and bias weights $\{\theta_*\}$, and the population individuals are independent and identically distributed, the indicator functions of $p_t$ (used in Algorithm 2) can be replaced here by the proportion $p^{(\kappa)}_t$ of the population in the associated subregions at each iteration. Namely, the indicator functions of $p_t$ (in Algorithm 2) are replaced by the law of the MCMC chain associated with the current parameter. A theoretical analysis in Appendix A shows that the multiple-chain weight update (in Algorithm 3) is asymptotically more efficient than the single-chain one (in Algorithm 2). The truncation step applies a truncation on $\theta_t$ to ensure that $\theta_t$ lies in a compact set $\Theta$, as in SAA; hence we consider the quantities $\theta_0$, $\{M_c\}$, and $\{c_t\}$ as in Section 2. The proposed algorithm (Algorithm 3) works as follows: at iteration $t$, we assume that the Markov chain is at …

The SPECT data set, available at the UC Irvine Machine Learning Repository, describes the diagnosis of cardiac SPECT images. It includes 267 SPECT image sets (patients), processed to obtain 22 binary feature patterns that summarise the original SPECT images. Each patient is classified into one of two categories: normal and abnormal.
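A hedged sketch of the proportion-based weight update described above (Algorithm 3's multiple-chain step, with the form of $H$ simplified to the population proportion minus the desired probability; `boundaries` stands for the interior grid points $u_j$ separating the $m$ energy subregions):

```python
import numpy as np

def multi_chain_weight_update(theta, gamma_t, energies, boundaries, pi_desired):
    """One multiple-chain update of the bias weights.

    energies   : current energy values of the kappa population chains
    boundaries : m-1 increasing energy levels splitting X into m subregions
    pi_desired : desired sampling probabilities (pi_1, ..., pi_m)

    The single-chain 0/1 indicator vector of Algorithm 2 is replaced by
    the proportion of the population falling in each subregion.
    """
    m = len(pi_desired)
    idx = np.searchsorted(boundaries, energies, side="right")  # subregion of each chain
    p = np.bincount(idx, minlength=m) / len(energies)          # population proportions
    return np.asarray(theta, dtype=float) + gamma_t * (p - np.asarray(pi_desired, dtype=float))
```

Averaging over $\kappa$ chains reduces the variance of the update, which is the source of the stability gains reported in the experiments.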