Annealing Cellular Automaton consensus

In this post, we describe the Annealing Cellular Automaton (ACA) consensus mechanism. The current version is the next iteration of an idea that started with changing the CA transition rules from the majority rule to Glauber-like dynamics. In this version, we introduce additional features inspired by ‘‘inertia’’ that eliminate the problems caused by an increased value of the parameter T (temperature), which increases fluctuations in the system. The associated randomness helps to avoid agreement failure (domain walls). Without inertia terms in the transition rules, high randomness could cause integrity failure due to spontaneous pop-ups of random opinion bubbles in the bulk of the system. This behavior is frequent at temperatures above the so-called critical temperature T_c.

The proposed consensus mechanism aims to combine:

\qquad \bullet Proofs of opinions
\qquad \bullet Low communication load
\qquad \bullet Very fast communication between peered nodes
\qquad \bullet Randomness of the protocol (required to evade FLP theorem)

At the same time, the proposed protocol aims to mitigate problems with the standard (majority) CA consensus, which is not secure against agreement failures (metastable/quenched states). The proposed solution also mitigates the problems caused by a naive application of Glauber-like transition rules, which were adopted in the previous versions of ACA, namely integrity failures. The current version of ACA also aims to decrease the complexity of the proposed consensus mechanism by reducing the number of moving parts (annealing requests).

{\LARGE \text { Motivations:}}

Thermodynamics of ferromagnetic systems, in particular statistical physics models like the Ising, Potts or Heisenberg models. Such systems, when cooled down adiabatically from an initial random state (which corresponds to high temperature), end up in a ground state where all vertices point in the same direction (unanimity).

In general, Ising-like models can get quenched, i.e., starting from random initial conditions on certain graphs, the system can end up in a metastable state. Metastable states are local but not global minima of energy. Boundaries between areas of different spins are called domain walls, and pairs of neighboring spins on opposite sides of a domain wall are energetically costly. Metastability means that no local change of spins can lower the energy of the system, so the spins cannot all end up pointing in the same direction.

Quenching of the system can be avoided in multiple ways. One of the simplest methods is evolution with a slowly decreasing temperature. This approach is called annealing and allows the system to pass through a local minimum of energy and reach the global one. When the evolution is slow enough, the system is guaranteed to end up in the global minimum. Another option for making domain walls unstable is a magnetic field term that breaks the symmetry of the Hamiltonian governing the evolution of the system. Such a symmetry-breaking term makes domain walls unstable, as opposed to metastable, as in the case described above. However, the introduction of such a field favors one spin (opinion) over the other and should be used with caution.

{\LARGE \text {Set up: }}

We assume that there exists an autopeering system that produces graphs closely resembling k-regular graphs with verifiable random neighborhoods. Proposed autopeering systems that might have these properties are:

\qquad \bullet Salt-based autopeering
\qquad \bullet Arrow autopeering

Both autopeering systems are currently being analyzed. However, arrow autopeering seems to be better at providing proofs of random autopeering. On the other hand, it requires a DRNG with perfect agreement among nodes. This discussion should be continued in another thread.

When a node receives the second of a pair of conflicting transactions, it sets its voting round counter for this particular conflict to zero. Nodes can be at different voting rounds for different conflicts. Since both conflicting transactions are expected to reach all nodes within a short time, this allows us to assume that all nodes are in approximately the same round for a particular conflict.

All nodes have implemented the same pseudo-random number generator (not a DRNG). It will be used later to confirm that a node followed the protocol in transition rules that require randomness. The exact procedure for producing and verifying randomness is discussed later.

{\LARGE \text {Voting protocol:}}

Upon voting on a given conflict, each node keeps a current round number associated with this conflict. Since the k-regular graph is expected to be a ‘‘small-world’’-like graph, we expect all nodes to have roughly the same round number. Two nodes at a distance d\in \mathbb{N} should have round numbers that differ by no more than d.

The round number is used to derive the ‘‘temperature’’ T of a given voting round. The temperature is responsible for randomness during voting. A bigger T increases the probability that a node accepts the minority opinion. On the other hand, T=0 results in no randomness: the node follows the majority opinion (standard CA). For different T, nodes have different transition rules (probabilities). What is important is that all nodes have roughly the same temperature, both in annealing rounds and in cold rounds.

{\Large{Global \; annealing:}}

The protocol involves changes to the temperature in different rounds. We periodically:

\qquad \bullet ‘‘Heat up’’ the system
\qquad \bullet Allow it to slowly cool down to zero (annealing)
\qquad \bullet Leave it for a few rounds at zero temperature to find the energy minimum

This procedure can repeat.

{\textbf{Heating up and annealing (+ inertia term):}}

We introduce a symbol

(R, T,\varepsilon),

which represents a sequence of R rounds, called a ‘‘phase’’, in which the temperature drops linearly from T towards zero, i.e., it equals T, \frac{R-1}{R}T,\frac{R-2}{R}T,\ldots, \frac{1}{R}T in the first and consecutive rounds. The parameter \varepsilon is an inertia parameter, introduced to mitigate problems with integrity failures at higher T.

Such a round sequence corresponds to the heating up and annealing of the system.

Note that for T=0 the annealing phase (R,0,\varepsilon) is simply zero-temperature voting (majority CA protocol).
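For illustration, the temperatures of a single phase could be generated as follows (a minimal Python sketch; the function name is ours and the sequence simply transcribes the one above):

```python
def phase_temperatures(R, T):
    """Temperatures of one (R, T, epsilon) phase: R rounds in which the
    temperature drops linearly as T, (R-1)/R * T, ..., 1/R * T.
    For T = 0 this is simply R rounds of zero-temperature (majority CA) voting."""
    return [T * (R - i) / R for i in range(R)]
```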

The exact number of rounds and of heating-up/annealing phases should be determined with simulations. We want to start our analysis with 3 phases of annealing separated by phases of majority voting:

(R, 0,\varepsilon) (R, T_1,\varepsilon)

(R, 0, \varepsilon) (R, T_2,\varepsilon)

(R, 0 ,\varepsilon) (R, T_3,\varepsilon)

And the finalising phase

(R, 0,\varepsilon)

where the temperatures T_1, T_2, T_3 approach the critical temperature T_c. On a graph, such an evolution would look like the following.

[Figure: temperature as a function of the voting round for the schedule above]
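For illustration only, the full schedule can be composed from such phases; the values of R and T_1, T_2, T_3 below are hypothetical placeholders, to be fixed by simulations:

```python
# Purely illustrative parameters: R and T_1 < T_2 < T_3 (close to the
# critical temperature T_c) are left to be determined by simulations.
R = 10
T1, T2, T3 = 0.5, 0.8, 1.0

schedule = []
for T_i in (T1, T2, T3):
    schedule += phase_temperatures(R, 0)     # cold (majority CA) phase
    schedule += phase_temperatures(R, T_i)   # heat up, then anneal towards zero
schedule += phase_temperatures(R, 0)         # finalising cold phase

assert len(schedule) == 7 * R                # e.g. 70 voting rounds for R = 10
```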

{\LARGE \text {Local voting:}}

{\Large{ Local \; voting \; in \; the \; zero \; temperature:}}

Now we describe the transition rules at zero temperature. Assume that in the current round node x holds opinion A \in \{0,1\}. Assume further that m of its neighbors also hold opinion A and n hold \neg A. Remember that the number of neighbors of a given node is m+n=k^* \leq k, as some of the nodes may not have their peer lists completely filled. Then in the next round node x is going to hold the opinion

{\text{opinion of } x {\text{ in the next round }}} = \left\{ \begin{array}{ll} A & \mbox{ if } m >n,\\ \neg A & \mbox{ if } m=n,\\ \neg A & \mbox{ if } m<n. \end{array} \right.

It might seem unnatural to require a vote change when n=m; however, this behavior is motivated by the analysis of ferromagnetic statistical models with Glauber dynamics. If the node keeps its opinion in the case of a draw, then the system quickly reaches a blocked configuration and nodes cannot reach unanimity.

However, as simulations of the N-dimensional Ising model show, the ferromagnetic steady state is reached for any positive probability of changing opinion when n=m. Since a probability equal to one is the only nonzero deterministic choice, we adopt it as the transition rule in our protocol.
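A minimal sketch of this zero-temperature rule (opinions encoded as 0/1 as above; the function name is ours):

```python
def next_opinion_zero_T(opinion, m, n):
    """Zero-temperature (majority CA) rule for a node holding `opinion` (0 or 1),
    with m neighbours sharing it and n holding the opposite one (m + n = k*).
    The opinion is kept only under a strict majority; a draw (m == n) flips it,
    which avoids blocked configurations."""
    return opinion if m > n else 1 - opinion
```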

{\Large{ Local \; voting \; in \; the \; nonzero \; temperature:}}

For rounds at nonzero temperatures, we adopt the following transition rules.

Assume that in the current round node x holds opinion A. Assume further that m of its neighbors also hold opinion A and n hold \neg A. Remember that m+n = k^* \leq k, where k^* is the actual number of neighbors, which is always less than or equal to k. Then the probability of x changing its opinion to \neg A equals

\begin{eqnarray*} P(x: A \rightarrow \neg A )= \left\{ \begin{array}{ll} 1 & \mbox{if: } m <n, \\ \exp((n-m)/T) & \mbox{if: } m \geq n \geq k^*\cdot \varepsilon,\\ 0 & \mbox{if: } k^*\cdot \varepsilon > n .\\ \end{array} \right. \end{eqnarray*}

To sum up, the node always flips its opinion if the majority of its neighbors hold the opposite opinion. When the majority of neighbors hold the same opinion, the node can switch to the minority opinion with exponentially decreasing probability. However, when the minority opinion is held by fewer than \varepsilon \cdot k^* neighbors, the node will never switch to that minority opinion. This \varepsilon cut-off can be associated with an inertia property. It prevents integrity failures by stopping pop-ups of bubbles of random opinions in the bulk of the system.
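The transition probability above can be transcribed directly; a sketch, assuming T > 0 on this branch of the schedule and \varepsilon \leq 1/2 so that the cases do not overlap:

```python
import math

def flip_probability(m, n, k_star, T, eps):
    """P(x: A -> not A) in a nonzero-temperature round: m neighbours share x's
    opinion A, n hold the opposite one, m + n = k_star, 0 < eps <= 1/2, T > 0."""
    if m < n:                       # opposite opinion is the local majority
        return 1.0
    if n < eps * k_star:            # inertia cut-off: too few dissenting neighbours
        return 0.0
    return math.exp((n - m) / T)    # m >= n >= eps * k_star
```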

{\LARGE \text { Verified randomness:}}

If the protocol is to require opinion justification, we must propose a scheme in which a node proves that it made a random choice with probability \exp((n-m)/T). This is where we utilize the fact that all nodes implement an identical pseudo-random number generator. To prove that it made a random choice, the node:

Calculates the seed for the pseudo-random number generator, which is the sum of:
\qquad \bullet Hash of transaction in question
\qquad \bullet Node identity id

Then the node calculates the first pseudo-random number r \sim {\rm unif}[0,1) using the seed, and makes the random choice with probability p if and only if

r \leq p .

In consecutive rounds, the node uses the previous random number as the input of the pseudo-random number generator.

Since all of the nodes use the same pseudo-random number generator and the seed can be calculated by anyone, submitted opinions can be verified.

The implemented random number generator does not have to be of good quality. It is enough that it covers the interval [0,1) uniformly.

Note that the seed of the pseudo-random number generator is different for different nodes (nodes use their ids). This ensures that different nodes use different random numbers r_n, which might be crucial in avoiding integrity failure. Otherwise, all network participants could select the same very small random number r, which might cause collective opinion switching.
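A minimal sketch of this scheme, under the assumptions that the shared generator is a simple hash chain (any deterministic generator that covers [0,1) uniformly would do) and that the seed is the integer sum of the transaction hash and the node id:

```python
import hashlib

def initial_seed(tx_hash: bytes, node_id: bytes) -> int:
    """Seed = hash of the transaction in question + node identity id,
    both interpreted as integers (the byte-to-integer mapping is an assumption)."""
    return int.from_bytes(tx_hash, "big") + int.from_bytes(node_id, "big")

def next_random(state: int):
    """One step of the shared pseudo-random generator: returns r in [0, 1)
    and the new state, which is fed back in the next round."""
    digest = hashlib.sha256(str(state).encode()).digest()
    new_state = int.from_bytes(digest, "big")
    return new_state / 2**256, new_state

# A verifier re-derives r_1, r_2, ... from the public seed and checks that the
# node flipped in round i iff r_i <= p_i, where p_i is the claimed flip
# probability exp((n - m) / T) for that round.
```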

{\LARGE \text {Protocol termination:}}

We terminate the protocol when the last phase of the protocol ends. The exact values of parameters are to be determined by simulations. In the example from the graph, the protocol would end after 70 rounds of voting.

Another option is to use the notion of \varepsilon quasi-final nodes QF^N_{\varepsilon}, which stop voting after finding that a 1-\varepsilon majority of their neighbors stick with the same opinion. Such a node would stop voting and send its neighbors a special vote justification, which can be used in all subsequent rounds. Such a node would play a similar role to a node whose decision was already finalized in FPC. In contrast, however, a QF^N_{\varepsilon} node could still listen to its neighbors and, if required, could come back to voting. It could come back if a majority of 50\% or 1-\varepsilon of its neighbors changed opinion. This, however, introduces a certain space for abuse of the protocol. For example, malicious neighbors could still use the QF^N_{\varepsilon} final opinion even though the node is back to voting.
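A sketch of the quasi-final condition, with the threshold interpreted as a fraction of the node's actual number of neighbors k^* (our assumption):

```python
def is_quasi_final(same_opinion_count, k_star, eps):
    """A node is epsilon quasi-final when at least a (1 - eps) fraction of its
    k* neighbours share its opinion; it may then stop voting and publish a
    reusable vote justification (and rejoin voting if this majority erodes)."""
    return same_opinion_count >= (1 - eps) * k_star
```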

{\LARGE \text {Modifications to the protocol:}}

The following features can be added to the protocol at a relatively small cost:

  • Color voting (based on Potts rather than Ising model)

  • Weighing votes by mana

  • \eta frustrated nodes and repeering

  • Long-range interactions (beyond nearest neighbors)

  • Discussion of temperature scales (critical temperature T_c)

{\LARGE \text {TO DO:}}
Further research on this protocol should include:

  • Calculations of the critical temperature for (a) finite k-regular graphs, (b) graphs obtained from autopeering, (c) mana-based autopeering graphs

  • Simulations of the consensus mechanism in an attacker-free environment, assessing the probability of agreement and integrity failure. Derivation of the best parameters for ACA, which include: \varepsilon, T_i, R_i, k. Scaling of the probabilities with the number of nodes N

  • Assessment of Byzantine resilience in a mana-free environment: Assume that a clique of q malicious nodes bypassed the autopeering system. These nodes successfully peered in a way that each of them peers with k/2 other malicious nodes and can justify any opinion (without additional berserk detection mechanisms). Then we want to assess how the probability of agreement and integrity failure behaves as q/N grows.

  • Assessment of Byzantine resilience with mana: Assume that nodes reveal a mana distribution that follows Zipf's law and an attacker controls many nodes with low mana. We want to analyze how agreement and integrity failures behave as the number of controlled nodes grows.

All problems mentioned above are very hard to analyze analytically and would need to be examined with simulations.
