(almost) URTS on a subset

As we know, the tip selection procedure is not enforceable, or at least not easily enforceable (we could still do some statistical analysis to know what a node is really doing; I recall some work in this direction done by @darcy.camargo et al.). It is therefore useful to consider various types of tip selection algorithms; after all, it is likely that in the real-world system several of them will be in use (due to the inherent freedom of choosing one).

In this note we aim to describe a tip selection rule which is a reasonable modification of the URTS (= Uniformly Random Tip Selection) algorithm considered in the original whitepaper.

Consider a specific node j. For any transaction v, let us denote by

  • t(v) the objective (signed) timestamp of the transaction;
  • r_j(v) the time of solidification of the transaction for this node (i.e., the time when the transaction and all of its history have been received by the node j).

Also, for two transactions u,v where v approves u, we introduce the sibling number s_j(v,u)\in \mathbb{N} (again, with respect to the node j) in the following way. If v_0,\ldots ,v_k approved u directly and the node j heard them in that consecutive order, then s_j(v_i,u)=i.
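As a concrete illustration, node j could track sibling numbers from arrival order like this (just a sketch; the class and method names are mine, not from any actual node implementation):

```python
from collections import defaultdict

class SiblingTracker:
    """Tracks s_j(v, u): the order in which this node heard the direct approvers of u."""

    def __init__(self):
        # u -> [v_0, v_1, ...] in the order this node heard them
        self._approvers = defaultdict(list)

    def record_approval(self, v, u):
        # Called the first time this node hears that v directly approves u.
        self._approvers[u].append(v)

    def sibling_number(self, v, u):
        # s_j(v_i, u) = i if v_i was the i-th approver of u heard by this node.
        return self._approvers[u].index(v)
```

So if the node hears v_0, v_1, v_2 approving u in that order, `sibling_number("v_1", "u")` returns 1.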

Fix positive constants C_1,C'_1,C_2 (which are not very large) and M (which is large). Suppose that the node j thinks that v is a tip, and v approves v_1 and v_2. Then, we define the weight w_j(v) of the transaction v for the node j in the following way:

  • if t(v)\notin [r_j(v)-C_1,r_j(v)+C'_1] (i.e., the transaction’s timestamp is either too far in the past or too far in the future), then set w_j(v)=0;
  • if \max_{i=1,2} (r_j(v)-t(v_i))>M , then set w_j(v)=0 (this is for the “below max depth”).

Otherwise (if the weight w_j(v) was not set to 0 according to the above rules) define

k_j(v) = \#\{i=1,2: r_j(v) - r_j(v_i)\leq C_2\}

to be the number of “nonlazy approvals” that v made. Then, it would be natural for the $j$th node
to set w_j(v):= k_j(v) and, for the tip selection, choose v with probability proportional to w_j(v).
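Putting the filters and the weight together, here is a Python sketch of this basic rule (the constant values are placeholders, and `t`, `r`, `parents` stand for the node’s local lookups; none of the names come from an actual implementation):

```python
import random

# Placeholder constants (seconds); the actual values are an open question.
C1, C1_PRIME, C2, M = 30.0, 30.0, 120.0, 600.0

def weight(v, t, r, parents):
    """w_j(v) for a tip v approving parents[v] = (v1, v2)."""
    v1, v2 = parents[v]
    # Timestamp too far in the past or in the future -> weight 0.
    if not (r[v] - C1 <= t[v] <= r[v] + C1_PRIME):
        return 0
    # "Below max depth": one of the approved transactions is too old.
    if max(r[v] - t[v1], r[v] - t[v2]) > M:
        return 0
    # k_j(v): the number of non-lazy approvals (approved tx solidified recently enough).
    return sum(1 for u in (v1, v2) if r[v] - r[u] <= C2)

def select_tip(tips, t, r, parents):
    # Choose a tip with probability proportional to its weight.
    # (Raises ValueError if every tip has weight 0.)
    weights = [weight(v, t, r, parents) for v in tips]
    return random.choices(tips, weights=weights, k=1)[0]
```

For example, a tip whose two approvals were both solidified within C_2 gets weight 2; if one approval was solidified too long ago, the weight drops to 1.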

It is not totally clear, though, how close the above method is to a Nash equilibrium, and how resilient it is against some other types of misbehavior (for example, deliberate attachments to not-so-old non-tips).
In view of this we may consider further modifications of this method.
Let f:\mathbb{N}\to[0,1] be some nonincreasing function with f(1)=1 (for example, f(m)=\alpha^{m-1} for some \alpha\in (0,1]).
Then, set

w_j(v) = f(s_j(v,v_1))\mathbf{1}\{r_j(v) - r_j(v_1)\leq C_2\} + f(s_j(v,v_2))\mathbf{1}\{r_j(v) - r_j(v_2)\leq C_2\}. \qquad (1)

Again, we then just choose v with probability proportional to w_j(v) for tip selection.
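Formula (1) could be sketched as below. One caveat: the sibling numbers defined earlier are 0-based (s_j(v_0,u)=0) while f is specified with f(1)=1, so the sketch uses f(m)=α^m on 0-based sibling numbers, which gives the first approver the full weight 1. All names and values here are illustrative, not from any implementation:

```python
ALPHA = 0.5   # example f(m) = ALPHA**m; the value is a placeholder
C2 = 120.0    # "lazy" threshold in seconds (also a placeholder)

def f(m):
    # Nonincreasing in the sibling number; with 0-based sibling numbers,
    # f(0) = 1 plays the role of the note's f(1) = 1 for 1-based numbering.
    return ALPHA ** m

def weight_eq1(v, r, parents, sibling_number):
    """Equation (1): each non-lazy approval of v contributes f(s_j(v, v_i))."""
    v1, v2 = parents[v]
    return sum(
        f(sibling_number(v, u))
        for u in (v1, v2)
        if r[v] - r[u] <= C2   # lazy approvals contribute nothing
    )
```

So a transaction that is the first approver of one parent and the third approver of the other gets weight f(0) + f(2) = 1.25 with α = 0.5, while a purely lazy transaction gets weight 0.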
To explain why this should be more “Nash-equilibrish”, recall the main idea of the “Equilibria in the Tangle” paper: if some “greedy” actors start favoring a smaller subset of tips, they would create a “competition” there, and be penalized if the weights are defined as in (1). It should also be more resilient against the aforementioned attack (when you attach to non-tips, your sibling number increases).


By the way: it’s probably a good idea to update sibling numbers if we dislike (definitively) some of the tips. Just, well, seems the right thing to do :slight_smile:

Let’s say I create a lazy tip $v$, and then I immediately create another tip $u$; if I understand your filter correctly, $u$ will not be lazy.
Maybe we can make a rule that any new tip that comes that approves a lazy tip is already lazy?
I wasn’t in the meetings so maybe you discussed this :stuck_out_tongue:, but I am not seeing it written down anywhere.

We also need to make sure we don’t violate belowMaxDepth. Just making sure whoever is writing the spec doesn’t forget :slight_smile:

Frankly, I don’t think it’s a good idea: a tip can become lazy by accident, so we should allow promoting it (that is, approving that tip together with some recent tip).

But if a tx approves two lazy tips, then it should be considered lazy indeed.
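A sketch of this rule as discussed above (one recent, non-lazy approval “promotes” the transaction; two lazy approvals make it lazy itself; the function name and the `lazy` set are my own formulation, not an agreed-upon spec):

```python
def is_lazy(v, r, parents, C2, lazy):
    """A transaction is lazy only if BOTH of its approvals are lazy.

    An individual approval counts as lazy if it targets an already-lazy
    transaction, or one that solidified more than C2 before v did.
    `lazy` is the set of transactions this node has already classified as lazy.
    """
    def approval_is_lazy(u):
        return u in lazy or r[v] - r[u] > C2

    v1, v2 = parents[v]
    return approval_is_lazy(v1) and approval_is_lazy(v2)
```

Under this rule, a transaction approving one lazy tip and one recent tip is not itself lazy, which is exactly what allows accidental lazy tips to be promoted.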

From a conversation with the research team:
We should filter and not gossip transactions which are “below max depth”. Meaning transactions that indirectly reference an old enough milestone or a too-large subtangle should be discarded.

Currently I am writing the issues for IRI.

  1. I would appreciate feedback on what the values of the constants ($C_1$, $C'_1$, $C_2$) should be. Should they be configurable per node (maybe that makes sense, because different nodes may have different latency)? I suppose those are more engineering questions, but maybe the research guys have opinions here :stuck_out_tongue:

  2. Based on what I understood from @mthcl on Slack, I am currently dropping $(1)$ and using binary weights, meaning a uniformly random selection over the tips.

  3. Another filter criterion I would like to add is to make sure that at least one of the tips we choose references the last milestone (the other tip can pick up left-behinds). This is because I believe that in Compass tip selection we should enforce a milestone chain. The reason is that if we don’t have a chain, we may encounter a problem of out-of-order milestones. Imagine that milestone $n$ funds address $A$. Now milestone $n+1$ spends from $A$. If a node solidifies on $n+1$ before it solidifies on $n$, then we have an issue…
    This can be fixed if we don’t want to enforce a milestone chain. Currently IRI has a reset option to fix such a problem. Hornet can request milestone by index (IRI will be able to do this as well in the near future).
    Still, the question remains whether we want to enforce a milestone chain. If we don’t, this means that instead of having a tangle that necessarily grows in a certain direction, we may have a tangle with an undefined blob shape (assuming people create tip explosions; otherwise the blob should merge back into a defined shape, I think). This may also affect the “Below Max Depth” condition (referencing old milestones will be less bad).

Hmm… dunno. Set them all to 30 sec or smth, and let’s see?..

Maybe I was not clear enough then :slight_smile:
I think it’s still better to penalize transactions that approve one lazy tip by decreasing their weight as in (1). Otherwise, nodes could feel incentivized to always approve one old tip and one new one (which would be quite undesirable, because the number of recent tips wouldn’t decrease in this case).

C_1 and C'_1 are for strange-timestamp detection, so 30 sec is OK for them, but C_2 is the lazy threshold; it should be at least 100–120 seconds, per our BerSum discussion.

Yes, maybe better to make C_2 larger than C_1 indeed.