Atomic transfers / transactions instead of bundles


IOTA uses the concept of bundles to create transfers. Bundles are a series of transactions that are linked together using their trunk reference.

These transactions have a fixed layout and a fixed size, independent of their “content”. Since the signature of a value transaction does not fit into a single transaction, a simple transfer usually requires at least 3 transactions: 2 transactions for the input and its signature, plus a transaction for the remainder (without a signature).
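To make the overhead concrete, here is a toy comparison. The byte sizes below are illustrative assumptions, not the actual IOTA layout: with fixed-size transactions every part of a bundle pays the full transaction size, while a variable-sized atomic transaction only carries what it needs.

```python
# Illustrative sizes only (not the real IOTA layout): compare the wire cost
# of a simple transfer built from fixed-size bundle transactions versus a
# hypothetical variable-sized atomic transaction.

TX_SIZE = 1604          # assumed fixed transaction size in bytes
SIG_FRAGMENT = 1458     # assumed signature fragment capacity per transaction

def bundle_cost(num_txs):
    """Every transaction in a bundle occupies the full fixed size."""
    return num_txs * TX_SIZE

def atomic_cost(header, inputs, outputs, signature):
    """A variable-sized transaction only carries what it needs."""
    return header + inputs + outputs + signature

# Simple transfer: 1 input (signature spans 2 txs) + 1 remainder output.
fixed = bundle_cost(3)
variable = atomic_cost(header=100, inputs=50, outputs=100, signature=2 * SIG_FRAGMENT)

print(f"bundle: {fixed} bytes, atomic: {variable} bytes")
```

Even in this generous model the atomic layout saves a substantial fraction of the bandwidth, and the gap grows with every extra bundle transaction.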

I want to use this thread to discuss a possible change of this mechanism and also discuss the reasons for this proposed change.

I want to start with the reasons why I think that atomic, variable sized transactions might be the better choice post-coordicide and then also discuss the aspects that are usually considered to be arguments “for” the current approach.

Reasons to use atomic, variable sized transactions instead of bundles

  1. Less network overhead: The transaction format can be adjusted to only carry the information that is really required. We can completely omit sending a lot of unnecessary information for the consecutive transactions of a bundle (2nd signature transaction + remainder). Especially the use of UTXO post-coordicide will most probably make this situation even worse and force us to use even more transactions to create a simple transfer.

  2. Fewer signature verifications: Post-coordicide, every transaction has to contain the node id and the signature of the node that issued it. This means that for a simple transfer we need to verify the signatures of at least 3 transactions. Considering that signature verification is the most expensive part of transaction processing, the introduction of node ids will at least triple the verification work per transfer if we stick to the current approach (and it gets even worse for bigger bundles), cutting node throughput to a third or less. This essentially means that nodes will be able to process hundreds, maybe even thousands, fewer transactions per second than they could with atomic transactions that are not split into bundles.

  3. Better spam protection and congestion control: The size of a bundle is not known until we have received its last transaction. It can therefore happen that we accept and route a certain number of transactions, only to detect later that the issuing node has exceeded its quota, and then drop all additional transactions. In that case we have routed and processed messages that could have been filtered right from the start if we had seen that the issuing node was trying to send a transfer that is too big. This might even open an attack vector where a node issues different bundles to different peers, which all start processing the bundle’s transactions and then drop them at different times, unnecessarily increasing the load in the network.

  4. Shorter Merkle proofs (for sharding): The Merkle proofs for inter-shard transactions get much smaller (by a factor of at least 3) if we do not have to traverse all the transactions in a bundle to reach the next transfer. This will make inter-shard transactions much faster and more lightweight.
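Points 3 and 4 can be sketched in a few lines. The API and sizes below are my own assumptions, not a spec: first, if a transfer declares its total size up front, a node can reject it against the issuer’s quota before routing anything; second, a proof that has to traverse k transactions per transfer is roughly k times longer than one that traverses a single atomic transaction.

```python
# Illustrative sketch of points 3 and 4 (hypothetical API and sizes).

HASH_SIZE = 32                      # assumed hash length in bytes
QUOTA = 5000                        # assumed per-node byte quota
used = 4000                         # bytes already consumed by this node

def accept_atomic(declared_size):
    """Filter on the very first message, before routing anything."""
    return used + declared_size <= QUOTA

def proof_size(transfers, txs_per_transfer):
    """Simplified proof model: one hash per traversed transaction."""
    return transfers * txs_per_transfer * HASH_SIZE

print(accept_atomic(3 * 1604))                  # oversized transfer rejected up front
print(proof_size(10, 3) // proof_size(10, 1))   # bundle proof is 3x longer
```

With bundles, the same quota check can only run per transaction as they arrive, after the earlier transactions of the bundle have already been routed.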

Reasons to use bundles instead of atomic transactions

I now want to discuss the reasons that previously led to the design decision of having bundles instead of variable sized transactions. I will write a short opinion to every point. If you feel like there are arguments missing or have a different opinion that challenges my view, feel free to answer accordingly.

The following list of arguments is taken from the corresponding Stack Exchange thread that asked about the reasons for fixed-size transactions and the concept of bundles:

  • Smaller codebase for transaction processing (if implemented in software)

Since one of the biggest bottlenecks of IOTA is the huge size of transactions, we have started to implement on-the-fly compression for the gossip layer. This makes the codebase much more complex and largely negates this argument.

  • Fewer logic gates for transaction processing (if implemented in hardware)

Same argument as above.

  • Higher security (because of a lower chance for buffer overflow type bugs to happen)

While I agree that it is “easier” to not make mistakes, well-written code should not have this kind of bug, and the codebase is quite complex anyway. So reducing the complexity of one part does not really “solve” anything if other parts need to deal with dynamically sized data anyway (especially considering the on-the-fly compression).

  • Reduced resource consumption (because of absence of issues like heap fragmentation)

This is actually a valid argument, but it is again negated by the on-the-fly compression, which is necessary anyway to achieve any meaningful network throughput.

  • Better load balancing (because transaction processing is more predictable)

This argument is not valid in the presence of a rate control and congestion module (see arguments above).

  • Atomic transmission over a physical medium (because packet size can be equal to transaction size + service data)

IOTA transactions are bigger than the default MTU size, so this would only make sense if we could shrink the size of a transaction below 1500 bytes.
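For reference, the arithmetic behind this (as far as I recall the legacy IRI wire format): a transaction is 2673 trytes, and the common byte encoding packs 5 trits per byte, which already overshoots a standard 1500-byte Ethernet MTU before any packet headers are added.

```python
# Legacy IOTA transaction size vs. the default Ethernet MTU.
import math

TRYTES_PER_TX = 2673                # legacy fixed transaction size in trytes
trits = TRYTES_PER_TX * 3           # 3 trits per tryte
wire_bytes = math.ceil(trits / 5)   # 5 trits packed per byte on the wire

print(wire_bytes, wire_bytes <= 1500)
```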


I do understand where the original ideas behind bundles are coming from, but I think that the changes that are necessary for coordicide, with performance impacts of several hundred percent, justify a change of this mechanism.

At the same time it gives us much more flexibility regarding things like meta-transactions and will allow us to fully utilize the network layer without wasting any unnecessary bandwidth.

Since the gossip layer does not need to compress things on the fly anymore, we will also see a much smaller codebase for both hardware and software.

Please discuss! :slight_smile:

One thing that CfB told me a few times was that the fixed tx size was meant for IoT devices. Hardware becomes immensely easier when it only needs to process fixed-size structures.
That is why he was always hammering on not pandering to humans: we are creating a protocol for IoT. Which we are constantly ignoring, and which is why I took his place in constantly bringing this up.
We are making more and more design decisions that have not been thought through w.r.t. IoT, because we mostly focus on processing value txns on server-class machines.

Bundles can have an arbitrary size and to validate the signatures of transactions, we need to calculate the bundle hash.

This means that we need to concatenate a variable amount of bundle essence hashes before being able to check the signatures. We therefore anyway need to deal with variable sized inputs. I do not see how fixed size transactions would allow us to not have to deal with variable sized inputs.
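As a simplified illustration of this point (not the real Kerl/bundle-hash scheme): validating a bundle signature means hashing a variable number of per-transaction essences, so the hashing code deals with variable-sized input either way, even when each essence is fixed-size.

```python
# Toy model: a bundle hash over a variable-length list of fixed-size
# transaction essences (SHA-256 stands in for the real sponge function).
import hashlib

def bundle_hash(essences):
    """Hash a variable-length list of fixed-size transaction essences."""
    h = hashlib.sha256()
    for essence in essences:      # variable count, even if each item is fixed-size
        h.update(essence)
    return h.hexdigest()

two = bundle_hash([b"a" * 96, b"b" * 96])
three = bundle_hash([b"a" * 96, b"b" * 96, b"c" * 96])
print(two != three)  # → True
```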

In addition: Any router is usually “hardware” and deals with variable sized inputs all the time. Why do you think that hardware can only handle fixed size inputs? That sounds really strange.

Maybe we can get some clarification of the hardware guys here as I am no expert.

The vanilla tangle of the original white paper didn’t have node ids, so the named problems didn’t exist there and the decision to have bundles made much more sense.

It’s a variable number of fixed-sized data structures. So yes, it’s variable, but it’s also fixed-size.
It’s also the reason Qubic only handles fixed-length trit vectors. Once translated to hardware you’re committed to a certain size. And yes, a router handles different sized inputs, but with hardware specifically geared to handle that, which is way more complex than it would be if the input sizes were the same. CfB wanted to keep IoT processing as cheap as possible so you would not need the equivalent of a CPU with memory management on a chip. It’s a trade-off decision.
We would need to pick CfB’s brain more to find out why he thought it was necessary for tx layout to be fixed, because I agree that a node processing txns has a lot more occasion for dynamic-sized structures. So maybe we should ask David or CfB to give us some answers here.

Hardware was definitely in mind when going for fixed-size transactions. It makes sense; however, any advantages are severely diminished by the fact that a bundle doesn’t have a fixed size and requires dynamic iteration for processing anyway.
Still, bundles seem to me a good atomic unit and I see strong arguments for bundles, but I think it must be thought through again. Bundles consisting of variable-sized polymorphic transactions seem to me the right idea. Here are my arguments about bundles as a lego of transactions:

  • in one bundle we can put as many different transfers as we want. All of them will be one atomic transaction. This is an awesome feature, especially because any data payload will only be confirmed if all other txs are confirmed
  • polymorphic transactions (those with meta-tx layouts and their own validation rules) would bring specific ledger constraints to the bundle. For example, chain transactions constrain a bundle of any complexity so that it won’t be confirmed if one chain transaction is conflicting. Value transactions and transfer-bundle validation rules are special cases of ledger constraints. This way we may bring specific ledger constraints and combinations of them into the bundle, and the possibilities are limitless (this resembles what is in Radix)
  • several parties may transact in an atomic way: one party makes part of the bundle, another party finalizes the bundle by adding its own transactions to make the bundle a consistent update of the ledger state
  • Example 1 of multiparty atomic bundles: in the current spec of assemblies (smart contracts), the requestor creates its own part of the bundle, which includes reward txs and input data. Then the business logic of the SC (computations) adds another part of transactions: with calculation results, reward-harvesting txs and so on. In the end, majority consensus finalizes the bundle by adding chain transactions with the chain constraint into the bundle. Once posted, the bundle (request + result) is confirmed atomically if all constraints are valid: it gets confirmed as a traceable calculation, or it never appears in the confirmed state.
  • Example 2: conditional transfers between parties recently discussed by Hans.
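The “lego” idea above can be sketched as follows. This is my own toy model, not a spec: a bundle is a list of typed transactions, and the bundle is valid only if every type-specific constraint holds, so any data payload confirms only together with the rest of the bundle.

```python
# Toy model of polymorphic transactions with type-specific validation rules.

def validate_value(tx, bundle):
    """Value constraint: inputs and outputs of the whole bundle must balance."""
    return sum(t.get("amount", 0) for t in bundle) == 0

def validate_data(tx, bundle):
    """Data payloads carry no ledger constraint of their own."""
    return True

VALIDATORS = {"value": validate_value, "data": validate_data}

def bundle_valid(bundle):
    """A bundle confirms atomically: every per-type constraint must hold."""
    return all(VALIDATORS[tx["type"]](tx, bundle) for tx in bundle)

bundle = [
    {"type": "value", "amount": -100},   # input
    {"type": "value", "amount": 100},    # output
    {"type": "data"},                    # payload, confirmed atomically with the rest
]
print(bundle_valid(bundle))  # → True
```

New constraint types (chain transactions, mana transactions, …) would simply register another validator, which is what makes the construction feel limitless.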

How would that be different with atomic transfers? A value transfer, for example, has a list of inputs, a list of outputs and the signatures. All three of these building blocks come in variable numbers but have a fixed size, and are therefore completely equivalent to the way you construct bundles.

It doesn’t make any difference at all if you “logically separate” them into separate transactions (apart from the fact that you force every transaction to carry all kinds of “fields” that are not required for what the building block is used for, thereby wasting huge amounts of bandwidth on garbage).

All of this would not be affected by making transfers atomic. A transfer contains several “transactions” in the same way as a bundle contains several transactions.

Hans, I agree with the waste of bandwidth. Which I am sure CfB considered, too. Hence my appeal to ask David/CfB about their reasons why this choice was made in the first place.
Why not allow flexible txns with a sig field size depending on the security level, for example?
Why not let txns that do not need a sig omit the field?
Indeed, why not take the MTU into account?
And in the case of ZVTs (data txns) that flexibility would make even more sense.

Then it is a bundle, so why change the name? I don’t get how an “atomic transfer” is any different. It seems it is not defined enough. A transaction is a vertex on the tangle; a bundle is an atomic batch of transactions.

I am all for variable-sized transactions bundled into atomic bundles, for the reasons you mentioned.

It’s essentially still a bundle, but instead of being constructed from a bunch of transactions with exactly the same layout, you will have a “bundle header transaction” (that contains the issuing node id + signature and the size of the bundle), a list of “input transactions” (that define the UTXO inputs), a list of “output transactions” (that define the addresses where the funds go) and a list of “signature transactions” (that contain the signatures for the used inputs).
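A minimal sketch of that layout (the names are mine, not a spec): one header carrying the node id and signature exactly once, followed by variable-length lists of fixed-size input, output and signature blocks.

```python
# Hypothetical atomic-transfer layout: node id/signature appear once in the
# header, while inputs, outputs and signatures are variable-length lists.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Header:
    node_id: str          # issuing node's id, stated once per transfer
    node_signature: str   # issuing node's signature, stated once per transfer
    block_count: int      # total number of blocks, known up front

@dataclass
class Transfer:
    header: Header
    inputs: List[str] = field(default_factory=list)      # UTXO references
    outputs: List[str] = field(default_factory=list)     # address:amount pairs
    signatures: List[str] = field(default_factory=list)  # one per input

t = Transfer(
    header=Header(node_id="node-A", node_signature="sig", block_count=3),
    inputs=["utxo-1"],
    outputs=["addr-B:100", "addr-remainder:23"],
    signatures=["input-sig-1"],
)
# The node id and signature appear exactly once, regardless of block count.
print(t.header.node_id)
```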

You can still call it a bundle if you want to …

The point is that now not every transaction needs to contain the node id over and over again but instead we only need the node id once.

I see, it is all about repeated fields, like the node id or bundle hash. I agree it must be optimized.
At the same time, the “normalized” and granular structure of the current bundles has something elegant in it.
We need the node id for mana, right? One possibility would be adding a special type of mana transaction with the node id into the bundle. It would only be valid if the mana equals the sum of the transfer. Actually, its presence in the bundle would be enough, and the mana transaction wouldn’t even be present in non-transfer bundles.
Only trunk, branch, tx-type and maybe some other fields would be mandatory and common to all. The rest depends on the tx type/layout.
This way we can build any type of bundle header.
EDIT: I re-read the above and realized we are actually on the same page. I only want to keep the name “bundle”, to reflect the “lego” nature of the construction: building the bundle transaction by transaction. Also, I want to see several signatures of several types on the same bundle, with different purposes in different combinations. For me, the “transfer” is just one specific type of update of the ledger state, among others.