# Computing YTSRI and OTSRI in Chrysalis

I made this post private since I wasn't sure if the chrysalis specs were public yet.

In Chrysalis, we need to compute YTSRI and OTSRI efficiently. For definitions, read this first.

A primer on Boolean algebra representations
Let X be a set of size n and A be a subset. Let V be the set of vectors of 1s and 0s of length n. Fix a canonical ordering of X=\{x_1,x_2,\dots,x_n\}. We can represent A as a vector v^A\in V: let v^A_i=1 if and only if x_i\in A. Similarly, every vector v\in V defines a set A^v\subset X: x_i\in A^v if and only if v_i=1. Thus V is a representation of all the subsets of X. (Fun fact: this representation endows the subsets of X with an \mathbb{F}_2 algebra structure!)

Intuitively, this representation is defined by vectors containing the Boolean values answering the questions “Is x_i in A?”.

Set theoretic operations on subsets of X turn into pointwise logical operations on V.

• A\cup B corresponds to pointwise OR operations
• A\cap B corresponds to pointwise AND operations
• A\setminus B (set deletion) corresponds to pointwise AND \neg operations (e.g. true AND \neg false=true)
So we can abuse notation slightly and apply the operations \cup, \cap, and \setminus directly to vectors in V.

Furthermore, consider a bit mask representation of a set. A bit mask effectively represents a set as a vector of 1s and 0s in the same way described above. Thus, we can apply the operations \cup, \cap, and \setminus to the bit masks using the logical operators discussed above.
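As an illustrative sketch (not part of the spec), the correspondence between set operations and bitwise operations can be seen with Python integer bitmasks, where bit i represents membership of x_{i+1}:

```python
# Subsets of X = {x_1, ..., x_n} as integer bitmasks:
# bit i is 1 exactly when x_{i+1} is in the set.

def mask_of(indices):
    """Build the bitmask v^A for A = {x_i : i in indices} (1-based indices)."""
    m = 0
    for i in indices:
        m |= 1 << (i - 1)
    return m

A = mask_of([1, 2, 4])   # {x1, x2, x4} -> 0b1011
B = mask_of([2, 3])      # {x2, x3}     -> 0b0110

union        = A | B     # A ∪ B: pointwise OR   -> {x1, x2, x3, x4}
intersection = A & B     # A ∩ B: pointwise AND  -> {x2}
difference   = A & ~B    # A \ B: pointwise AND NOT -> {x1, x4}
```

`mask_of` and the variable names are just for the example; any fixed-width bit array with the same three operators would do.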

Preliminaries
To perform the algorithms, a node will need to store the following information. In this document, unconfirmed means not approved by a milestone.

• Every transaction must be stored with its YTSRI and OTSRI
• The recent cone of a transaction is the unconfirmed transactions in its past cone. Every transaction must be stored with a bitmask representing the recent cone. For a transaction tx, we denote the bitmask by tx.recentCone.
• A bud is a transaction whose recent cone is simply itself. We maintain a list of buds called budList. This list can also be represented as a bit mask.

Incoming transaction
First, we indicate how to update all the above information for an incoming transaction tx which directly references transactions x and y.

1. The YTSRI of tx is the maximum of the YTSRIs of x and y, and the OTSRI of tx is the minimum of their OTSRIs.
2. tx.recentCone=x.recentCone\cup y.recentCone\cup hash(tx), where hash(tx) is the bitmask of the singleton set \{tx\}.
3. If x and y are both referenced by milestones, then add tx to the bud list.

One can easily check that these actions are correct. Moreover, they are computationally cheap.
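The three steps above might be sketched as follows; the dict-based representation, the field names, and the `tx_bit` parameter are illustrative assumptions, not from the spec:

```python
def on_incoming(tx_bit, x, y, bud_list):
    """Apply the three update steps for an incoming transaction tx that
    directly references x and y. Transactions are dicts; bitmasks are ints."""
    tx = {
        # 1. YTSRI = max of the parents' YTSRIs; OTSRI = min of their OTSRIs.
        "ytsri": max(x["ytsri"], y["ytsri"]),
        "otsri": min(x["otsri"], y["otsri"]),
        # 2. recentCone = x.recentCone ∪ y.recentCone ∪ {tx}.
        "recent_cone": x["recent_cone"] | y["recent_cone"] | tx_bit,
    }
    # 3. If both parents are already referenced by milestones, tx is a bud.
    if x["confirmed"] and y["confirmed"]:
        bud_list |= tx_bit
    return tx, bud_list
```

Each call is O(1) apart from the bitmask unions, which matches the claim that these updates are computationally cheap.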

Incoming milestones
When a new milestone arrives, updating all these fields becomes more tricky. Moreover, we want to limit the number of times we need to traverse the tangle.

We can actually update this information with two traversals of the unconfirmed transactions. Let newlyConfirmed be the bit mask of the transactions whose milestone index is the last milestone. These are precisely the transactions that were confirmed by the last milestone. This vector can be easily computed while the node is calculating the new ledger state under white flag.

First, we reset budList to the 0 vector (i.e. the empty set). In the first traversal of the unconfirmed transactions, we do the following for each tx:

1. Set newRecentCone=tx.recentCone\setminus newlyConfirmed.
2. If newRecentCone does not equal tx.recentCone, then set tx.YTSRI=the current milestone.
3. Set tx.recentCone=newRecentCone.
4. If size(tx.recentCone)=1, then tx is a bud: add tx to budList and set tx.OTSRI=min{tx.trunk.milestoneNumber, tx.branch.milestoneNumber}.


After this first traversal, the following are up to date:

1. The YTSRI of each transaction
2. The recent cone of each transaction
3. The bud list
4. The OTSRI of each bud
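A minimal sketch of this first traversal, under the same illustrative representation as before (dicts with hypothetical field names, integer bitmasks):

```python
def first_traversal(unconfirmed, newly_confirmed, current_milestone):
    """One pass over the unconfirmed transactions after a milestone arrives.
    budList has been reset to 0; returns the rebuilt bud list."""
    bud_list = 0
    for tx in unconfirmed:
        new_cone = tx["recent_cone"] & ~newly_confirmed   # recentCone \ newlyConfirmed
        if new_cone != tx["recent_cone"]:                 # the cone shrank...
            tx["ytsri"] = current_milestone               # ...so the YTSRI advances
        tx["recent_cone"] = new_cone
        if bin(new_cone).count("1") == 1:                 # cone is just tx itself
            bud_list |= tx["bit"]                         # tx is a bud
            tx["otsri"] = min(tx["trunk_ms"], tx["branch_ms"])
    return bud_list
```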

Next, we retraverse the unconfirmed transactions, using the bud list to update the OTSRIs. Specifically, for each unconfirmed tx:

Set tx.OTSRI=min{x.OTSRI | x\in tx.recentCone\cap budList}


This algorithm produces the correct OTSRI because one can prove that the recent cone of every unconfirmed transaction contains a bud with the same OTSRI.
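The second traversal can be sketched in the same illustrative style; `otsri_of_bud`, which maps a bud's bit to that bud's OTSRI, is a hypothetical lookup, and the code relies on the claim above that every recent cone contains at least one bud:

```python
def iter_bits(mask):
    """Yield each set bit of a bitmask as its own one-bit mask."""
    while mask:
        low = mask & -mask    # isolate the lowest set bit
        yield low
        mask ^= low

def second_traversal(unconfirmed, bud_list, otsri_of_bud):
    """Second pass: recompute each OTSRI from the buds in the recent cone."""
    for tx in unconfirmed:
        buds = tx["recent_cone"] & bud_list   # recentCone ∩ budList, never empty
        tx["otsri"] = min(otsri_of_bud[b] for b in iter_bits(buds))
```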

Alternatives
The second traversal might not be strictly necessary. First, a node could simply store the milestone index at which each OTSRI was last computed. If the OTSRI is out of date when some application needs it, it can be updated on the fly.

Alternatively, both traversals can be done at once if the unconfirmed transactions are traversed from past to present: specifically, we only process a transaction once we have passed through its entire past. This way, all the buds in a transaction's past cone would have already been identified. However, I cannot think of an efficient way of finding such a traversal.

Resource consumption
First, let us discuss memory. Let N be the number of unconfirmed transactions. We will need bit masks of size O(N) to prevent collisions. Note that even though the total number of transactions that get stored in bit masks throughout time is unbounded, only N transactions need to be represented at any given time.

Since each unconfirmed transaction will need its own bit mask, the memory consumption is O(N^2).

Since the order of each traversal doesn't matter, we don't actually need to walk the tangle. Thus these algorithms involve O(N) look-up operations. Similarly, we will perform O(N) pointwise bitmask operations, each on masks of length O(N), yielding O(N^2) logical operations.

Overall, these algorithms won't scale indefinitely. However, they should handle a TPS in the 100s.

Open Questions

1. What happens if there is a collision in the bit mask? Will it cause a small aberration in the tip selection algorithm, or will it cause a network split?
2. What is a good upper bound for N? What happens when this upper bound is violated? Can the node simply cap the number of unconfirmed transactions at N?
3. How big do the bit masks need to be? This is dependent on the previous question. If occasional collisions can be tolerated, then the bit masks can be much smaller. If not, we need large bit masks.

Instead of writing a new post, I thought I would just write this here.

Determining lazy/below max depth essentially comes down to computing the OTSRI, which is expensive to do. I propose an alternative definition which would be easier to compute.

In this new definition, we say that a tip is below max depth (respectively lazy) if, upon arrival, one of its parents is either (A) unconfirmed and below max depth (resp. lazy) or (B) confirmed with milestone index less than the current milestone minus 15 (resp. 7).

These definitions of lazy/below max depth do not change with incoming milestones: if a transaction satisfies below max depth upon arrival, then it always will. Since this purely recursive definition is very easy to check for incoming transactions, the computational problems become much easier.
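A sketch of the proposed arrival-time check, assuming hypothetical field names and the thresholds 15 and 7 from the definition above:

```python
BELOW_MAX_DEPTH_DELTA = 15   # thresholds from the proposed definition
LAZY_DELTA = 7

def parent_triggers(parent, current_milestone, delta):
    """Check one parent against criterion (A) or (B)."""
    if parent["confirmed"]:
        # (B): confirmed with milestone index below the moving cutoff.
        return parent["milestone_index"] < current_milestone - delta
    # (A): unconfirmed, so reuse the flag fixed when the parent arrived.
    return parent["flagged"]

def tip_below_max_depth(trunk, branch, current_milestone):
    """A tip is below max depth if either parent triggers the criterion;
    the result is frozen at arrival and never re-evaluated."""
    return (parent_triggers(trunk, current_milestone, BELOW_MAX_DEPTH_DELTA)
            or parent_triggers(branch, current_milestone, BELOW_MAX_DEPTH_DELTA))
```

The same `parent_triggers` helper with `LAZY_DELTA` would give the lazy check.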

Unfortunately, the below max depth criterion won't be as objective as it was before. Indeed, depending on the arrival of the next milestone, not all nodes will agree on what is below max depth. For instance, suppose transaction x directly references a confirmed transaction y with milestone index equal to the current milestone minus 15. If x is issued around the same time the next milestone is issued, any node that receives x first will consider it valid, and any node receiving the milestone first will consider x invalid.

Note that in this situation, I think only “false positives” would occur: since the coordinator will of course receive the milestone before x in the above example, x will be considered invalid by the coordinator but valid by other nodes.

Please check my logic on this point, but I think for snapshotting purposes, false positives are better than false negatives (I only want to snapshot things which valid transactions cannot reference).