The following optimisation methods are also described in https://www.overleaf.com/read/hvrnfjdtgygs
In the FPC voting protocol, nodes can query other nodes directly for their opinions; alternatively, nodes may gossip their opinions. The latter may be necessary, for example, if nodes are queried with probability proportional to their mana and the queried node is rich in mana: such a node would face a large query load. Since the node in question has sufficient mana, it can instead send its opinions, e.g. in data txs, to reduce the necessary communication overhead.
We would like gossiped opinions in FPC to require a minimum amount of communication overhead. To this end we must make some assumptions about the communication model. More specifically, we assume that the gossiped opinions follow a probabilistic synchronous model (PSM), in which a majority proportion \gamma of the messages is delivered within a bounded time with probability at least 1-\epsilon.
We require that the gossiped opinions, and the outcomes of the votes local to a node, follow the Monotonicity and Consistency rules:
- Monotonicity: A tx x can only obtain the opinion 1 (like) if all txs in its past cone have the opinion 1. If a tx x has obtained the opinion 0 (dislike) in a given FPC round, all txs in its future cone have the opinion 0.
- Consistency: If the transactions tx_1, ..., tx_n form a conflict set, then at most one of them can obtain the opinion 1.
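The two rules above can be sketched as simple validity checks on a toy DAG. This is only an illustration: the tx names, the `parents` map (past-cone edges), and the conflict-set representation are assumptions made for the example, not part of the protocol specification.

```python
# Illustrative checks of the Monotonicity and Consistency rules.
# opinions: tx -> 0 (dislike) or 1 (like); parents: tx -> list of past-cone txs.

def monotone(opinions, parents):
    """Monotonicity: a tx may be liked only if its entire past cone is liked."""
    def past_cone_liked(tx):
        return all(opinions[p] == 1 and past_cone_liked(p)
                   for p in parents.get(tx, []))
    return all(past_cone_liked(tx) for tx, op in opinions.items() if op == 1)

def consistent(opinions, conflict_sets):
    """Consistency: at most one liked tx per conflict set."""
    return all(sum(opinions[tx] for tx in cs) <= 1 for cs in conflict_sets)
```

For example, liking a tx whose parent is disliked violates Monotonicity, and liking two txs of the same conflict set violates Consistency.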
We can apply the following optimisations (with increasing requirements on the PSM):
- Minimum number of consecutive rounds: after a number of rounds m, a sufficiently large proportion of nodes will have received at least one of the gossiped opinion messages w.h.p. For illustration, if a given node receives each round's message independently with probability \gamma, the probability that it misses all m messages is (1-\gamma)^m, which is at most \epsilon for m >= log(\epsilon)/log(1-\gamma). Therefore unchanged opinions are only required to be sent for at least m rounds.
- Monotonicity: Nodes can convey the monotonic implications for other txs by sending only the oldest disliked or the latest liked tx. Here the probabilistic synchrony assumptions are applied. For missing liked txs, the receiving node can request the missing tx from its neighbours, since it received the (compressed) tx hash.
- Consistency: Similarly, this method can be applied to conflict sets, where only the liked tx of the conflicting txs (if there is one) is sent. However, this requires stronger assumptions on the probabilistic synchrony model, since conflict sets can be very large.
- Monotonicity + Consistency: If it is assumed that nodes have a high level of probabilistic synchrony (or even synchrony) for the gossiped txs, we can combine both of these compression methods. This allows us to apply the Monotonicity rules to disliked txs that are not explicitly mentioned in the opinion-gossip message. However, this noticeably increases the requirements on the visibility of conflict sets.
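The Monotonicity-based compression above can be sketched for the simplest case of a linear chain of txs. This is a minimal sketch under strong assumptions: the chain structure, tx names, and message fields (`latest_liked`, `oldest_disliked`) are illustrative, and a real implementation would operate on a DAG rather than a chain.

```python
# Illustrative Monotonicity compression on a linear chain of txs,
# ordered oldest to newest. Opinions: 0 (dislike) or 1 (like).

def compress(chain, opinions):
    """Sender: transmit only the latest liked and the oldest disliked tx."""
    liked = [tx for tx in chain if opinions[tx] == 1]
    disliked = [tx for tx in chain if opinions[tx] == 0]
    return {"latest_liked": liked[-1] if liked else None,
            "oldest_disliked": disliked[0] if disliked else None}

def expand(chain, msg):
    """Receiver: reconstruct all opinions from the two boundary txs."""
    opinions = {}
    if msg["latest_liked"] is not None:
        # Monotonicity: the past cone of a liked tx is liked.
        for tx in chain[: chain.index(msg["latest_liked"]) + 1]:
            opinions[tx] = 1
    if msg["oldest_disliked"] is not None:
        # Monotonicity: the future cone of a disliked tx is disliked.
        for tx in chain[chain.index(msg["oldest_disliked"]):]:
            opinions[tx] = 0
    return opinions
```

Instead of n opinions, only two tx identifiers are gossiped, at the cost of the receiver needing to know the chain (or DAG) structure.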
In addition, we can also apply compression methods to the information that is gossiped to identify a tx (which we call the identifier). Generally, the entire information of a tx, such as the tx hash, timestamp, etc., is not required to identify a message. We can employ a reduced amount of information at the expense of an increased likelihood of collisions. Two methods are discussed here:
- prefix trie: For the prefix trie approach it is sufficient to send part of the tx hash. An advantage of this method is that if the listening node receives a reduced identifier for a tx that it does not already have, it can request the missing tx from its neighbours using the compressed identifier.
- Cuckoo filter: A cuckoo filter is a space-efficient probabilistic data structure which, similarly to the prefix trie approach, answers whether a tx is definitely not in a set or possibly in it.
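The prefix idea can be sketched as follows. The hash function, the prefix length k, and the lookup helper are assumptions made for the example; the document does not fix these parameters.

```python
# Illustrative prefix identifier: send only the first k hex characters of the
# tx hash; the receiver matches the prefix against the txs it already knows.

import hashlib

def identifier(tx_bytes, k=8):
    """Compressed identifier: the first k hex characters of the tx hash."""
    return hashlib.sha256(tx_bytes).hexdigest()[:k]

def resolve(ident, known_txs, k=8):
    """Return all known txs matching the prefix. More than one match is a
    collision; an empty list means the tx must be requested from neighbours."""
    return [tx for tx in known_txs if identifier(tx, k) == ident]
```

A shorter prefix saves more bandwidth but raises the collision probability, which for random hashes is roughly proportional to 16^-k per pair of txs.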
Since in both approaches the compression method is deterministic, collisions are mineable: an adversary can construct txs whose compressed identifiers collide. We can avoid this by requiring that the gossiping node hashes the identifier together with a seed before applying the prefix trie or Cuckoo filter approach. It then sends the data (prefix trie or cuckoo filter) together with the seed. However, this requires additional computational effort on the side of both the gossiping node and the receiving node, since they have to hash each tx with every new seed.
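The seeded variant can be sketched as follows for the prefix case. The seed length, hash function, and message layout are assumptions for the sketch; the source only requires that the seed is hashed together with the identifier and sent along with the compressed data.

```python
# Illustrative seeded identifiers: a fresh seed per gossip message makes
# precomputed (mined) collisions useless, since the adversary does not know
# the seed in advance.

import hashlib
import os

def seeded_identifier(tx_hash, seed, k=8):
    """Hash the seed together with the tx identifier before truncating."""
    return hashlib.sha256(seed + tx_hash).hexdigest()[:k]

def gossip_message(tx_hashes, k=8):
    """Send the seed alongside the compressed identifiers; the receiver must
    re-hash each of its known txs with this seed to match them."""
    seed = os.urandom(16)
    return seed, [seeded_identifier(h, seed, k) for h in tx_hashes]
```

This makes the extra cost explicit: on every new seed, both sides recompute one hash per tx, which is the numerical effort mentioned above.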