Coordicide Dust Mitigation

There have been several discussions regarding dust, particularly post coordicide, and I thought the following might be interesting. In this post, we view dust as a rate control problem where the database size is a resource whose access must be managed. Thus, we can use the congestion control algorithm to prevent database bloat. If the following idea works, we could prevent dust attacks without any sort of annoying deposit scheme.

The congestion control algorithm scores every message with a “work score”, and then schedules messages, weighted by their work score, at a constant rate. Thus, only so many work score units are scheduled per second. Moreover, the number of work score units scheduled per second for messages issued by a particular node is proportional to the mana held by that node. In other words, throughput in the network is measured entirely in work score units.
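To make this concrete, here is a minimal sketch of such a scheduler (not the actual node implementation): messages are drained at an assumed fixed rate of work score units per second, and each node's share of that rate is proportional to its mana. The rate NU, the mana values, and the data layout are all illustrative assumptions.

```python
# Deficit-round-robin style sketch of a work-score scheduler.
NU = 1000.0  # assumed scheduling rate, in work score units per second

class Scheduler:
    def __init__(self, mana):
        self.mana = mana                        # node_id -> mana held
        self.total_mana = sum(mana.values())
        self.deficit = {n: 0.0 for n in mana}   # accumulated quota per node
        self.queues = {n: [] for n in mana}     # per-node FIFO of (work_score, msg)

    def submit(self, node_id, work_score, msg):
        self.queues[node_id].append((work_score, msg))

    def schedule(self, dt):
        """Release messages for a time slice of dt seconds."""
        budget = NU * dt  # total work score units available in this slice
        scheduled = []
        for node_id, queue in self.queues.items():
            # each node's share of the budget is proportional to its mana
            self.deficit[node_id] += budget * self.mana[node_id] / self.total_mana
            while queue and queue[0][0] <= self.deficit[node_id]:
                work, msg = queue.pop(0)
                self.deficit[node_id] -= work
                scheduled.append(msg)
        return scheduled
```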

So what the heck is a work score unit? Well, whoever implements the algorithm gets to decide, and they can decide whatever they want. Essentially, the work score should measure how much of the following three resources the message consumes:

  • Bandwidth
  • Computational Power
  • Memory

Thus, implicitly, there are actually three types of work units: bandwidth work units, computational work units, and memory work units. The work score of a message should be the max of the three types of work units it consumes.
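As a toy illustration of the “max of the three” rule (how each resource score is actually measured is up to the implementer, so the inputs below are placeholders):

```python
def work_score(bandwidth_units: float, compute_units: float, memory_units: float) -> float:
    # The overall work score of a message is the maximum of its bandwidth,
    # computational, and memory work units, so whichever resource the message
    # stresses the most determines its scheduling cost.
    return max(bandwidth_units, compute_units, memory_units)
```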

In this post, we discuss memory work units and how they can be used to prevent database bloat. Here, we basically equate memory consumption with the amount of database space a message takes up. Let D be the size of the current database, and let M be the max size of the database.

The following is my idea. For a message/transaction X, let \Delta D(X) be the amount of information it adds to the database (by convention, transactions which reduce the size of the database, like sweeping transactions, will have \Delta D(X)=0, and not a negative number). Basically, \Delta D(X) is the net size of the outputs created by X. The memory work units of X will be defined as
w_m(X)=\frac{\Delta D(X)}{k(M-D)}
where k is a parameter we set which determines how hard it is to fill up the database.
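As a sketch (names, units, and parameter values are purely illustrative), this is all the memory work score computation amounts to:

```python
def memory_work_units(delta_d: float, d: float, m: float, k: float) -> float:
    """Illustrative w_m(X) = Delta D(X) / (k * (M - D)), assuming d < m.

    delta_d: net database size added by the message (0 if it shrinks the database)
    d:       current database size
    m:       maximum database size
    k:       parameter controlling how hard it is to fill the database
    """
    delta_d = max(delta_d, 0.0)  # sweeping transactions count as 0, never negative
    return delta_d / (k * (m - d))
```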

Doing some calculus, if the dust attacker is a node holding a proportion q of the active mana, then the database will grow, at most, at the following rate:
D=M-(M-D_0) e^{-\lambda q k t}
where D_0 is the initial size of the database, \lambda is the number of work score units scheduled per second, and t is the length of the attack.
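To spell out the calculus: the attacker is scheduled at most \lambda q work score units per second, and by the definition of w_m each memory work unit scheduled adds k(M-D) to the database, so
\frac{dD}{dt}=\lambda q k (M-D), \qquad D(0)=D_0
which integrates to the expression above.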

So, dust attacks cannot bloat the database with this method. However, dust attacks will increase the difficulty of adding to the database, i.e., creating new outputs will require more mana.

How bad is this attack? It takes
t=\frac{\mathrm{ln}(2)}{\lambda q k}
time for the attacker to increase the difficulty by a factor of 2. It's impossible to really make sense of this formula without some numbers, but we can see that the time is inversely proportional to k. Thus, decreasing k makes it harder for an attacker to increase the difficulty, but a smaller k also means a higher difficulty of adding to the database in the first place. So there is really a tradeoff.
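To get a rough feel for the formula, here is a throwaway calculation with completely made-up numbers (none of these are proposed parameters):

```python
from math import log

# Hypothetical values: 1000 work score units scheduled per second,
# an attacker with 10% of the active mana, and k = 1e-4.
lam, q, k = 1000.0, 0.10, 1e-4

doubling_time = log(2) / (lam * q * k)  # seconds for the attacker to double the difficulty
print(round(doubling_time, 1))  # ~69.3 seconds with these made-up numbers
```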

Will this approach work? It really depends on whether we can find a k that is small enough to prevent an attacker from easily increasing the difficulty, while still being large enough to keep the baseline difficulty low and the network usable. It's hard to say at this point, before we implement congestion control and get a sense of the number of work units which will be processed per second.

This approach could also be used in conjunction with other dust protection measures that would make it expensive to increase the difficulty. However, it would be nice not to have any sort of deposit mechanisms.


One comment: I think this approach only works for coordicide. The work score of each message depends on how much was previously added to the database, and thus there will be slight discrepancies based on when a message was booked. The congestion control algorithm can handle these (very small) discrepancies, since the work score basically describes how long the node needs to wait before processing a message.

In Chrysalis, if you tried to set the PoW difficulty based on the size of the database, these discrepancies become a problem. PoW difficulty has to be judged objectively: either the message has enough PoW and is accepted, or it doesn't and is rejected. A message with a borderline difficulty will thus fork the network.

Interesting approach Billy!
So it seems that effectively the attacker can raise the minimum hardware requirements. One downside of this approach seems to be that although only low value txs cause the problem, the penalization is systemic. An adversary that uses a subclass of messages (dust value txs) can penalize an entirely different class of messages that have no role in causing the problem (normal value txs, data messages).

Maybe it would make sense to instead introduce a tiered system, where writing to the ledger disproportionately penalizes the culprits, i.e. dust value txs. Of course, the attacker can still increase the minimum hardware requirements for issuing dust sized txs, but that's it: normal issuers wouldn't be affected.

That is an interesting idea. We could specifically allocate, say, half of the database to dust and half to non-dust, and then apply this solution to each part.
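As a rough sketch of that tiered variant (the 50/50 split, the class names, and all numbers are placeholders), each class keeps its own partition accounting, so a dust attack only raises the cost for other dust-sized txs:

```python
# Hypothetical partition sizes: half of the database budget for dust outputs,
# half for everything else.
partitions = {
    "dust":     {"d": 0.0, "m": 5e9},
    "non_dust": {"d": 0.0, "m": 5e9},
}

def tiered_memory_work_units(delta_d: float, msg_class: str, k: float) -> float:
    # Charge the message's memory work score against its own partition only,
    # using the same formula as before but with per-class D and M.
    p = partitions[msg_class]
    added = max(delta_d, 0.0)
    work = added / (k * (p["m"] - p["d"]))
    p["d"] += added  # only this class's partition fills up
    return work
```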


That sounds really good! Looking forward to the solution.