Specifications on Timestamps

This post specifies how we would vote on timestamps. We first set some preliminaries.

  • We pick parameters \Delta and w.
  • Let D be the network delay and assume that w>2D.
  • We use a binary voting protocol with a passive option.
  • Let \tau(x) denote the timestamp of a transaction x.

First, a transaction x has a valid timestamp if for every z\in \downarrow x (the past cone of x),
\tau(z)>\tau(y_i) and \tau(z)-\tau(y_i)<\Delta
for each parent y_i of z.
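A minimal sketch of this check, assuming a hypothetical Transaction type and a simple depth-first walk of the past cone (illustrative only, not the actual node implementation):

```go
package main

import (
	"fmt"
	"time"
)

// Transaction is a hypothetical stand-in for a real Tangle transaction.
type Transaction struct {
	Timestamp time.Time
	Parents   []*Transaction
}

// timestampsValid walks the past cone of x and checks that every
// transaction z satisfies tau(z) > tau(y) and tau(z) - tau(y) < Delta
// for each parent y of z, as required above.
func timestampsValid(x *Transaction, delta time.Duration) bool {
	visited := map[*Transaction]bool{}
	stack := []*Transaction{x}
	for len(stack) > 0 {
		z := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if visited[z] {
			continue
		}
		visited[z] = true
		for _, y := range z.Parents {
			gap := z.Timestamp.Sub(y.Timestamp)
			if gap <= 0 || gap >= delta {
				return false // timestamp not increasing, or gap too large
			}
			stack = append(stack, y)
		}
	}
	return true
}

func main() {
	genesis := &Transaction{Timestamp: time.Now().Add(-time.Minute)}
	x := &Transaction{Timestamp: time.Now(), Parents: []*Transaction{genesis}}
	fmt.Println(timestampsValid(x, 2*time.Minute)) // true
}
```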

Next, the opinion on a transaction will be a pair (b,\ell), where b is a boolean value, with 0 meaning “bad” and 1 meaning “good”, and \ell is the level of knowledge. Specifically, nodes set the opinion on a transaction in the following way.

  • (0,3) if |\tau(x)-t_{\mbox{arrival}}(x)|\ge w+2D
  • (0,2) if w+D\le|\tau(x)-t_{\mbox{arrival}}(x)|< w+2D
  • (0,1) if w\le |\tau(x)-t_{\mbox{arrival}}(x)|< w+D
  • (1,1) if w-D\le |\tau(x)-t_{\mbox{arrival}}(x)|<w
  • (1,2) if w-2D\le |\tau(x)-t_{\mbox{arrival}}(x)|< w-D
  • (1,3) if |\tau(x)-t_{\mbox{arrival}}(x)|< w-2D

If the opinion is held with level 1, the node participates fully in the vote, both querying and responding to queries. With level 2, the node only responds to queries and does not change its opinion. With level 3, the node does not participate in the voting at all.
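To make the rule concrete, here is a hedged sketch in Go; the Opinion struct and the names timestampOpinion and participation are made up for illustration and are not the node software’s actual API:

```go
package main

import (
	"fmt"
	"time"
)

// Opinion is the pair (b, l): b the boolean value and l the level of
// knowledge, as defined above.
type Opinion struct {
	Good  bool
	Level int
}

// timestampOpinion reproduces the case table above for the absolute
// difference diff = |tau(x) - t_arrival(x)| and parameters w and D.
func timestampOpinion(diff, w, d time.Duration) Opinion {
	if diff < 0 {
		diff = -diff
	}
	switch {
	case diff >= w+2*d:
		return Opinion{false, 3}
	case diff >= w+d:
		return Opinion{false, 2}
	case diff >= w:
		return Opinion{false, 1}
	case diff >= w-d:
		return Opinion{true, 1}
	case diff >= w-2*d:
		return Opinion{true, 2}
	default:
		return Opinion{true, 3}
	}
}

// participation encodes the rule above: level 1 votes fully, level 2
// only answers queries, level 3 stays out of the vote entirely.
func participation(o Opinion) string {
	switch o.Level {
	case 1:
		return "query and respond"
	case 2:
		return "respond only, opinion fixed"
	default:
		return "no participation"
	}
}

func main() {
	w, d := 5*time.Second, 500*time.Millisecond
	o := timestampOpinion(1*time.Second, w, d)
	fmt.Println(o, participation(o)) // {true 3} no participation
}
```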

Most honestly issued transactions will have timestamps with opinion (1,3), and no voting will take place. Moreover, no group of nodes can convince everyone to reject these transactions (since honest nodes would not even participate in the vote).

This timestamp proposal solves both aspects of the max depth problem. First, it prevents people from attaching transactions to very old transactions. Second, there is a limit to the number of validations any new transaction would require: indeed, since \tau(x)>\tau(y) when x references y, the timestamps along any chain into the past strictly decrease. So when solidifying, if the past cone is too big, it will contain transactions with bad timestamps, and the transaction can be rejected.

Since the network delay becomes a parameter used in this kind of system, I think it would be nice to have at least some idea of how to estimate it, since it is not known. Differences in perceived network delay could lead to some “threshold” cases.

If I recall the discussions in the Hague, according to the engineers, the network delay is significantly less than 0.5 seconds, and so setting w=5s would be very safe.
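For concreteness, plugging D=0.5s and w=5s into the case table above puts the level boundaries at 4s, 4.5s, 5s, 5.5s and 6s: a timestamp within 4s of the arrival time gets opinion (1,3), one off by 6s or more gets (0,3), and only timestamps in the narrow 4–6s band around w lead to any voting participation at all.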

But yes, this is a number that needs to be well understood, both for this proposal and the FCoB decision rule.

Well, if we are dealing with times of this order, it should not be a problem.

The perceived network delay can individually be off by an arbitrary amount (seconds, minutes, hours or even days) and is just an “average”. Attackers can DDoS nodes, or nodes can just randomly lose connectivity.

While I like the “levels of knowledge” idea, I think that you cannot use them to make “hard decisions”, such as allowing nodes to stop voting on things.

What might work however is that you adjust the thresholds required in the voting process based on the perceived difference between issuing and arrival time.
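For illustration, one shape such an adjustment could take; the linear interpolation and the concrete threshold range below are made-up assumptions, not a worked-out proposal:

```go
package main

import (
	"fmt"
	"time"
)

// voteThreshold returns the fraction of "like" replies a node would
// require to keep liking a transaction. Instead of a hard cutoff, the
// bar rises smoothly with the perceived difference between issuing and
// arrival time.
func voteThreshold(diff, w, d time.Duration) float64 {
	const (
		minThreshold = 0.5 // timestamp well inside the window
		maxThreshold = 0.9 // timestamp far outside the window
	)
	if diff < 0 {
		diff = -diff
	}
	lo, hi := w-2*d, w+2*d // the uncertain band around w
	switch {
	case diff <= lo:
		return minThreshold
	case diff >= hi:
		return maxThreshold
	default: // interpolate linearly across the band
		frac := float64(diff-lo) / float64(hi-lo)
		return minThreshold + frac*(maxThreshold-minThreshold)
	}
}

func main() {
	w, d := 5*time.Second, 500*time.Millisecond
	fmt.Println(voteThreshold(5*time.Second, w, d)) // 0.7, halfway through the band
}
```

The point is only that the required threshold rises smoothly instead of flipping a hard switch at a single delay-derived cutoff.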

All of our proposals use the network delay, however, particularly when we use the FCoB rule to decide conflicts. So if this is not a realistic assumption, then we need to rethink many other things.

We use them to form initial opinions (whether we like something or not) - not to define hard cutoff thresholds for the voting. It is totally fine to do the former, but it is not okay to do the latter.

If a node can be DDoSed to have a large delay, won’t this be a problem anyway? I don’t see large desyncs as being the problem themselves, but more as indicative of a problem. The question becomes which protections our nodes have from DDoS and which actions they can take if they realise they are in such a situation.

Sure it’s a problem :stuck_out_tongue: But if you don’t make “decisions” during times of network outage, then you also cannot finalize a wrong decision.

If you however make decisions “based on network delays”, then it’s extremely easy to “kill” a node. And of course you could say that you add some kind of protection mechanism, but if your node is really overloaded, then it won’t even be able to make decisions anymore :frowning_face:

Plus you would have to define the rules for these kinds of decisions.

The problem you are describing here is exactly the difference between the Asynchronous Network Model and a weak Synchronous Network Model. In an asynchronous setting you assume that any message can be delayed (by an attacker or just “bad luck”) for an arbitrary amount of time.
In the Synchronous Network Model, a global bound \Delta exists such that whenever a message is sent by an honest node, it is delivered after at most time \Delta. (Note: crashing nodes or nodes under a DDoS attack can still be considered in this model, but count towards the number of malicious nodes.)

This is kind of the base assumption we have! It actually makes a lot of sense to consider the bounded model (and this is mostly what we have been doing so far), since in the fully asynchronous setting many steps need to be completely reconsidered and there are several impossibility results.

Wolfgang, I agree with your description, and it makes a lot of sense to use the weakly synchronous model to mathematically “analyze” things like the voting mechanisms and so on, because nodes will “on average” behave in the expected way.

But it is one thing to use a certain model for mathematical analysis and another thing to assume that this is a perfect representation of the real world that always holds and can be used to define hard thresholds for any kind of decisions.

Defining a propagation-delay-based rule for finalizing decisions will allow anybody to kill the whole network by DDoSing some of the large mana holders for a few seconds while a conflict is propagated.

We cannot ignore reality and assume that DDoS attacks do not exist - especially if the IPs of nodes are public.

Please note: making decisions based on a certain assumed propagation delay is not the problem - only making irreversible decisions that would lead to a node falling out of sync if this assumption fails even a single time is.

I understand your concern, and I am not sure what to say. Even with voting on conflicts, don’t we have this same problem? We set our initial opinion on conflicts with the FCoB rule: “have I seen any conflicts before arrival time + c?” Here c must be bigger than the network delay, otherwise the voting protocol can output “like” on two conflicting transactions. This would have profound implications for the TSA. Indeed, it would force us to go back to the white paper idea of using the TSA to choose the heaviest branch in some sense.
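For reference, a rough sketch of the FCoB rule as described here (the function and its inputs are hypothetical stand-ins, not the real node API):

```go
package main

import (
	"fmt"
	"time"
)

// fcobInitialOpinion implements the rule "like x iff no conflicting
// transaction was seen before arrivalTime(x) + c". For the rule to be
// safe, c must exceed the network delay.
func fcobInitialOpinion(arrival time.Time, c time.Duration, conflictArrivals []time.Time) bool {
	deadline := arrival.Add(c)
	for _, t := range conflictArrivals {
		if t.Before(deadline) {
			return false // a conflict arrived inside the window: dislike
		}
	}
	return true // no conflict seen in time: like
}

func main() {
	now := time.Now()
	// A conflict arriving within c of the original transaction
	// makes the node dislike it.
	fmt.Println(fcobInitialOpinion(now, 2*time.Second, []time.Time{now.Add(time.Second)})) // false
}
```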

Also, FPC itself would break under a DDoS attack, right? If I am an attacker with, say, 5% or 2% of the mana, all I have to do is DDoS enough high mana nodes until I hold 15% of the remaining active mana, and then I can break FPC.

It should not be a problem if the voting layer has a part of the nodes liking one transaction and another part liking the other transaction. Then neither of the two options will reach a supermajority in the voting and we will essentially drop both. Since the FCoB rule is only used to form an initial opinion, this should not have any bad side effects. Nodes that are under a DDoS attack will most probably not even be able to query others and therefore also not finalize any decisions. In the worst case, people might have to reattach some transactions because they attached to transactions that became liked due to the FCoB rule (and consequently started to be used by the TSA) but then turned out to be disliked during voting.

This, by the way, also does not mean that we cannot vote on timestamps - voting on timestamps is definitely fine. But you will always have to vote, because otherwise you might make a different decision locally just because you lost internet for a few seconds or your server crashed or … (it doesn’t have to be a DDoS attack).

Regarding your other attack, where somebody takes out large mana holders to boost their own influence in the voting: that is indeed an attack vector, but ideally you wouldn’t have the mana concentrated on just a few nodes. DDoSing a handful of nodes is possible, but DDoSing say 100 nodes at the same time will be sufficiently hard.

This is by the way the reason why I actually like sublinear mana benefits. It incentivizes large mana holders to split identities and spin up more nodes. This increases the “decentralization” of the network.

Guys, what is clear is that we need to test all of this on a “practical” network…

The high mana nodes are supposed to be not so easy to DDoS out :slight_smile:

So now I am confused about the conversation. Are we worried about someone DDoSing a large number of nodes to increase the network delay, or are we worried about an attacker just throwing a few nodes out of sync? The latter seems to not be a problem (the node will know it is being attacked and will resync).

This is actually not quite right. If 30-40% of honest nodes initially vote yes for two conflicting transactions, then with high probability they will both be accepted, which is intolerable in our current approach.

So in conclusion, if an attacker can arbitrarily increase the network delay, then we have a problem regardless of what we are voting on.

This does seem correct. Also, we need to see what the rate control produces. Understanding the exact gossip protocol will allow us to understand 1) what the network delay is, and 2) how susceptible the delay is to attack.

We are first and foremost worried about particular nodes being affected (not the network as a whole). It does not have to be an attack; it can also be a short network outage where the internet breaks down for a few seconds (just enough for the node to finalize a wrong decision). But you can of course also have larger network splits that affect bigger sections of the network.

A node cannot easily detect this. If you are however talking about proactive out-of-sync detection, then I would need to see the specs for that, as it sounds like this would be another consensus mechanism on top of the existing ones?

Wh0t? FPC doesn’t require a supermajority of votes?

Ok, based on our slack discussion, maybe this isn’t quite right.

But we still do have a problem. We need an upper bound on the network delay to make the levels of knowledge work with the votes. If we cannot use these, then we have to run FPC on every single transaction.

Why would you need to vote on every transaction? You just vote on transactions that you dislike - either because they are conflicting or because they are below max depth?