Proposal: logical layout of the variable-sized transactions

Issuer Id: Public key of issuing node

The Unified Identity Protocol defines a DID (Decentralized Identifier), which resolves (it points to an IOTA address) to a DID Document containing a list of active public keys associated with the identity. If we adopt this for node identities, nodes can switch public keys after a hack, or for any other reason there is to change keys (increased security?). Nodes would then track node DIDs, each of which might list two public keys: one used to actively sign transactions, and one authorized to sign a node DID transaction that changes the other public key. The latter key can be stored offline and allows a node to recover its identity after a key exposure without losing all the mana it has gathered.
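The two-key scheme above can be sketched as follows. This is purely illustrative, assuming a hot signing key plus an offline recovery key; the names (`DidDocument`, `rotate_signing_key`) are hypothetical and not part of the Unified Identity Protocol.

```python
from dataclasses import dataclass

@dataclass
class DidDocument:
    did: str
    signing_key: str    # hot key, used to actively sign transactions
    recovery_key: str   # kept offline, only authorized to rotate keys

def rotate_signing_key(doc: DidDocument, new_key: str, authorized_by: str) -> DidDocument:
    """Replace the active signing key; only the offline recovery key may authorize this."""
    if authorized_by != doc.recovery_key:
        raise PermissionError("only the recovery key may rotate the signing key")
    return DidDocument(doc.did, new_key, doc.recovery_key)

doc = DidDocument("did:iota:node1", signing_key="pubA", recovery_key="pubR")
doc = rotate_signing_key(doc, "pubB", authorized_by="pubR")
print(doc.signing_key)  # pubB
```

The point of the split is that the recovery key never has to touch the network until the hot key is compromised.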

What would need to be changed:

  • Issuer Id inside the tx layout holds a DID instead of a public key.
  • Nodes actively cache the current public signing key per node DID (call it the DID Ledger).
  • Nodes update the DID Ledger when a validly signed transaction happens on the address associated with the DID. (“NodeIdTangle”?)
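The DID Ledger idea in the list above could look roughly like this: a map from node DID to the currently active signing key, updated only when a DID transaction passes signature validation. All names here are assumptions for illustration.

```python
# Hypothetical "DID Ledger": cache of the current public signing key per node DID.
did_ledger = {"did:iota:node1": "pubA"}

def apply_did_update(did: str, new_signing_key: str, signature_valid: bool) -> bool:
    """Update the cached signing key for a node DID, but only for validly signed updates."""
    if not signature_valid:
        return False  # reject: the update was not signed by an authorized key
    did_ledger[did] = new_signing_key
    return True

apply_did_update("did:iota:node1", "pubB", signature_valid=True)
print(did_ledger["did:iota:node1"])  # pubB
```

Issuer lookups during transaction validation would then hit this cache instead of resolving the DID Document every time.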

Payload section

If I am reading this correctly, payload sizes are going to be dynamic. So I assume transactions only containing a hash have a small payload, and bundles are no longer needed. Why are we still thinking in blocks, though? Can’t a payload be defined in bytes/trytes? Secondly, if payload sizes can differ immensely, shouldn’t the Issuer Transaction Counter for rate control be based on the size of the transactions instead of their number? A transaction of 1024 bytes and one of 8192 bytes should not have the same effect on rate control, right?
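The size-based rate-control idea above can be sketched as a per-interval byte budget instead of a transaction count. The budget constant and function names are illustrative assumptions, not anything from the actual rate-control spec.

```python
# Hypothetical per-issuer byte budget for one rate-control interval.
BYTE_BUDGET_PER_INTERVAL = 16_384

def within_rate(tx_sizes: list[int]) -> bool:
    """Charge each tx by its size, so a 1024-byte tx uses 8x less budget than an 8192-byte tx."""
    return sum(tx_sizes) <= BYTE_BUDGET_PER_INTERVAL

print(within_rate([1024] * 8))          # True  (8 KiB of 16 KiB used)
print(within_rate([8192, 8192, 1024]))  # False (17 KiB exceeds the budget)
```

Under a pure transaction counter, both of those issuers would look identical at three vs. eight transactions, which is exactly the asymmetry the question points at.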

Just my 2 cents, let me know if there is any interest in using DID.

This is a good question. The transaction counter actually helps gossiping/processing. The rate control will definitely account for the size of transactions. However, measuring the rate of transactions issued by a node is easier if you are sure you have all the transactions it has issued so far. Thus, a transaction counter helps.

The reason is that IOTA was planned to have one Tangle. One may discern different subtangles, but they can possibly merge together at any time. This allows mobile things to continue to interoperate as they travel through neighborhoods. All things should implement a standard consensus protocol that utilizes bandwidth in the best possible way, in a battle against the CAP theorem.

@hans.moog I see that your transaction design doesn’t suit IOTA. One reason that makes me think so is bandwidth utilization. The final design allows omitting transmission of the payload and/or essence (the first two parts of a tx) when they are not required to reach a conclusion about consensus. This is possible because sufficient info to decide about fetching (or not) can be set as the first 3 trits of the transaction hash.
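A sketch of the flag idea described above (and spelled out later in the thread as head, tail & request type): a node reads only the leading trits of the transaction hash to decide whether to fetch the payload/essence at all. The exact flag assignment here is my assumption for illustration, not the actual Ict encoding.

```python
def decode_flags(hash_trits: list[int]) -> dict:
    """Read the three leading flag trits of a (trinary) transaction hash.
    Trits take values -1, 0, or 1; here 1 is treated as 'flag set'."""
    head, tail, request = hash_trits[:3]
    return {"is_head": head == 1, "is_tail": tail == 1, "request_type": request}

# Only the first 3 trits matter for the fetch decision; the rest is the hash body.
flags = decode_flags([1, 0, -1, 1, 1, 0])
print(flags["is_head"])  # True
```

Because the flags live inside the hash itself, a peer can drop or skip a transaction before requesting its heavier parts.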

Also note that a trinary hash is practically better than a binary hash here. The latter would require 4 bits, making the puzzle harder.

What is the reason for rejecting final design?

Not sure if I understand correctly, but are you asking why we need blocks instead of just one byte array? Blocks, each with its own type and ledger semantics, are needed for modularity and extensibility of the protocol.

You should try to read it again and try to understand what the “ontologies concept” means. The “different” tangles I am talking about are not separate DAGs but merely “different perspectives” on the same underlying data structure (maybe I should write a post about it).

What would be the use case to not catch a transaction? And why would that not be possible with atomic transactions? You can reserve a trit in the header section as well and drop the packet as soon as you see the corresponding trit in the header.

Not sure what you mean by “header”; 3 trits are needed to mark head, tail & request type. Those have to be in the transaction hash for integrity and laziness. Even if the header is hashed, you still need to fetch it.

What would be the use case to not catch a transaction? And why would that not be possible with atomic transactions?

I didn’t say it’s not possible; I claim it doesn’t scale, according to Amdahl’s law.
In the final design there is no concept of bundles, which allows validating consensus & ledger regardless of transaction order. Bundles or atomic transactions require ordering, which doesn’t scale. I recommend adjusting the design with Amdahl’s law in mind.

Btw, another remark about the necessity of a trinary hash: H(0) = 0. This allows an arbitrary number of data txs with null essence within a bundle at zero cost, because they can be skipped in the bundle hash calculation. You can even do signature validation while a bundle still contains missing data txs, increasing the degree of parallelism further and (in)validating subtangles earlier.
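A toy version of the H(0) = 0 argument above: if hashing an all-zero essence yields zero, null-essence data txs contribute nothing to the bundle hash and can simply be skipped. `toy_hash` is a stand-in, not Curl/Troika; it only mimics the H(0) = 0 property, and the XOR accumulator is likewise just for illustration.

```python
def toy_hash(trits: tuple) -> int:
    """Stand-in hash with the single property being exploited: H(0) = 0."""
    if all(t == 0 for t in trits):
        return 0
    return hash(trits) | 1  # any deterministic nonzero stand-in value

def bundle_hash(essences: list[tuple]) -> int:
    acc = 0
    for e in essences:
        h = toy_hash(e)
        if h == 0:
            continue  # null-essence data tx: skipped at zero cost
        acc ^= h
    return acc

null = (0, 0, 0)
print(bundle_hash([(1, 0, 1), null, null]) == bundle_hash([(1, 0, 1)]))  # True
```

The practical consequence is that a validator can commit to the bundle hash without ever fetching the null-essence data txs.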

What are you talking about? You cannot even verify the signatures if you don’t build the bundles first. And to build the bundles you have to sort the transactions of the bundle.

How is that different from atomic transactions, apart from the fact that your “final design” wastes a lot of “space” on fields that are not even required in that part of the message and on “references” to the other parts of the message? It’s horribly inefficient and makes the process of building the bundles unnecessarily complex.

A bundle consisting of 10 transfers needs to be hashed at least 10 times using non-atomic transactions, while an atomic transaction needs to be hashed a single time. Who cares about a slightly more efficient way to calculate the bundle hash? 10 + [something small] is still much bigger than 1.
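The count above, written out as a sketch. These are illustrative hash-invocation counts for the comparison being made, not real benchmark numbers from either design.

```python
def hashes_non_atomic(num_txs: int) -> int:
    """One hash per transaction in the bundle, plus the bundle hash itself."""
    return num_txs + 1

def hashes_atomic(num_txs: int) -> int:
    """One hash over the whole atomic transaction, regardless of transfer count."""
    return 1

print(hashes_non_atomic(10), hashes_atomic(10))  # 11 1
```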

You can not even verify the signatures if you don’t build the bundles first.

Sorry, maybe I wasn’t clear by “consensus”. In this segment I meant validating the attachment part and the essence. One may check sigs too, if all txs that affect the bundle hash are fetched. It’s a trick with flags in the transaction hash. I can explain the details about the flags, if you want.

I don’t understand your second question at all; please reword it before we continue. I have an argument that the final design is orders of magnitude more efficient, but I want this debate to be constructive and to avoid two parallel monologues.

@chrisd what do you mean with “final design”? do you have any description of it?

@samuel.rufinatscha It’s CfB’s final design, the one used by Ict. It’s called final because IoT devices can’t be updated after standardization and deployment in IoT.

Okay, thanks @chrisd, good to know that you’re talking about CfB’s Ict transaction design.

You wrote:

In final design there is no concept of bundles

There is. Back when the Omega team was still up, Lukas and I had a discussion with him about bundles. CfB’s idea was that bundles could be built by swarms. To say there is no concept of bundles is therefore not entirely correct.

Here is the Ict documentation, which also describes the bundle-transaction layout:
https://github.com/iotaledger/ict/blob/master/docs/ARCHITECTURE.md

I agree with this; perhaps I should have said there are no bundles as defined in the protocol we use now. In fact, we can theorize about the bundle concept in our heads; the code doesn’t need to care about it, only process transactions.

That’s not correct - flags for the bundle head and tail are encoded in the transaction hash. Bundles still exist. Otherwise you could not do value transfers.
