Last week there was an incident in TON: a huge load on one of the addresses created a queue of transactions. The blockchain itself withstood the load - the sharding system did its job - but because of the queue, and the resulting strain on indexers, the ecosystem was effectively paralyzed for a couple of days until the backlog cleared.
The load was caused by a new type of token, TON-20, an analogue of BRC-20 from the Bitcoin network. The TON-20 docs contain almost no information beyond the fact that the message format copies BRC-20. Looking at a blockchain explorer, anyone familiar with smart contract development will be surprised: just a pile of ordinary transactions with JSON inside the messages.
But if this is supposed to be a fungible token, there must be some way to get a balance, validate transfers, and so on, right?
In this article I will try to answer these questions. We will look at how BRC-20 “tokens” work and why they cause problems for blockchains and for the infrastructure projects built around them.
Space in a transaction
BRC-20 is a concept for fungible tokens on the Bitcoin network. It emerged after the success of Bitcoin Ordinals, a protocol that allows NFTs to be created on Bitcoin. This became possible thanks to:
OP_RETURN - an opcode that lets a transaction include an additional output carrying arbitrary data (but no funds), such as metadata. Under standard relay rules this data was limited to 80 bytes.
The SegWit and Taproot upgrades, which made it possible to embed much larger blobs of data - up to roughly 400 KB under standard transaction limits - in the witness part of a transaction. This is where Ordinals inscriptions actually live.
Thus it became possible to attach enough information to a Bitcoin transaction to implement token logic on top of it.
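To make this concrete, here is a minimal sketch of how arbitrary data is packed into an OP_RETURN locking script. The opcode value (0x6a) and the simple length-prefixed push are standard Bitcoin script; the helper function itself is purely illustrative, and, as noted above, actual inscriptions ride in the Taproot witness rather than in OP_RETURN.

```python
# Minimal sketch: an OP_RETURN locking script that carries arbitrary data.
# 0x6a is the OP_RETURN opcode; for payloads up to 75 bytes the push is a
# one-byte length followed by the data itself.

def op_return_script(payload: bytes) -> bytes:
    if len(payload) > 75:
        raise ValueError("longer payloads need OP_PUSHDATA1/2 encodings")
    return bytes([0x6A, len(payload)]) + payload

metadata = b"an output that carries data but no funds"
print(op_return_script(metadata).hex())
```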
Looking ahead: the author of the BRC-20 concept explicitly notes that it is an experiment, not a best practice - a fair position to take. But as soon as money started flowing into such tokens, everyone forgot about that.
BRC-20
The idea behind BRC-20 is to attach one of three types of metadata to a transaction:
Deploy - creates a new BRC-20 token
Mint - issues units of an already deployed BRC-20 token
Transfer - moves such tokens between addresses
Metadata example:
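The payloads themselves are tiny JSON documents. A sketch of the three operation types, with field names following the BRC-20 spec linked below (the ticker and amounts are just illustrative values):

```python
import json

# The three BRC-20 operations; field names follow the BRC-20 spec,
# the ticker and numbers are illustrative.
deploy   = {"p": "brc-20", "op": "deploy",   "tick": "ordi", "max": "21000000", "lim": "1000"}
mint     = {"p": "brc-20", "op": "mint",     "tick": "ordi", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "100"}

for payload in (deploy, mint, transfer):
    print(json.dumps(payload))
```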
And now the most interesting part: how does the “protocol” track how many tokens, for example, I can transfer to a friend? A wallet that understands the standard scans all the relevant transactions and decides on its own whether each one is valid. In effect it is a chain of records built on top of the transaction chains the blockchain already maintains.
The same goes for balances: the blockchain itself stores only the records, and all the remaining logic is moved off-chain into wallets or BRC-20-compatible services.
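To see what that off-chain logic amounts to, here is a simplified sketch of the accounting a BRC-20-aware wallet or indexer has to perform. It assumes the operations have already been extracted from transactions in chronological order and that the sender/receiver addresses come from the surrounding transactions (they are not part of the JSON); the real validity rules, such as the two-step transfer inscriptions, are more involved.

```python
from collections import defaultdict

def replay(operations):
    """Rebuild token state by replaying every operation in block order.

    Each operation is a dict with the JSON fields plus "from"/"to" taken
    from the transaction that carried it (simplified; real rules differ).
    """
    tokens = {}                   # tick -> {"max": int, "lim": int, "minted": int}
    balances = defaultdict(int)   # (tick, address) -> amount

    for op in operations:
        tick = op["tick"]
        if op["op"] == "deploy" and tick not in tokens:
            tokens[tick] = {"max": int(op["max"]), "lim": int(op["lim"]), "minted": 0}
        elif op["op"] == "mint" and tick in tokens:
            amt, t = int(op["amt"]), tokens[tick]
            if amt <= t["lim"] and t["minted"] + amt <= t["max"]:
                t["minted"] += amt
                balances[(tick, op["to"])] += amt
        elif op["op"] == "transfer" and tick in tokens:
            amt = int(op["amt"])
            if balances[(tick, op["from"])] >= amt:
                balances[(tick, op["from"])] -= amt
                balances[(tick, op["to"])] += amt
        # Anything that does not satisfy the rules is simply ignored as invalid.

    return balances
```

Nothing on-chain enforces any of this: two indexers that replay the history differently will simply disagree about who owns what.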
It is hard to even call such mechanics a digital asset, which is why some developers believe BRC-20 transactions should be treated as spam and filtered accordingly so they do not clog the mempool.
To give you a sense of scale, the whole concept is described in just a couple of pages: https://domo-2.gitbook.io/brc-20-experiment/
Meanwhile, at the time of writing, BRC-20 tokens on the Bitcoin network have a total capitalization of more than $1 billion according to https://www.brc-20.io/. A full-fledged market - such is the power of marketing.
House of cards
Why does such a primitive concept lead to problems? Let's look at the example of TON and its copy of the idea, TON-20.
As the concept implies, in general it does not matter where transactions are sent: only the fact that a transaction with the right data happened matters, and everything else is off-chain. So if we want to build a wallet that supports TON-20, or some kind of explorer site for it, we need a powerful blockchain indexer from which we can compute balances and, more generally, work out which transactions are valid and which are not.
And here lies an important detail: with ordinary fungible tokens, query depth (how far back in the blocks we need to look) is small. Balances are stored in the latest state of smart contracts, and the logic inside the contract does not require parsing a pile of addresses to decide whether Petya can transfer tokens to Alice.
With TON-20 and BRC-20 it is the other way around: constant deep queries across a large number of addresses are required, and all of this puts a heavy load on indexers.
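The difference in query patterns, sketched below. The client interface here is hypothetical (real indexer APIs differ in details); get_wallet_data is the standard jetton-wallet get-method from TEP-74, and the replay rules are the ones sketched earlier.

```python
from typing import Iterable, Protocol

class ChainClient(Protocol):
    """Hypothetical client interface; real indexer APIs differ in details."""
    def run_get_method(self, address: str, method: str) -> list: ...
    def transactions(self, address: str) -> Iterable[dict]: ...

# Ordinary fungible token (a TON jetton): the balance sits in the latest state
# of the owner's jetton-wallet contract, so one get-method call is enough.
def jetton_balance(client: ChainClient, jetton_wallet: str) -> int:
    stack = client.run_get_method(jetton_wallet, "get_wallet_data")
    return int(stack[0])  # TEP-74: the first item returned is the balance

# TON-20-style token: no contract stores the balance, so the indexer has to
# pull and re-validate the entire relevant transaction history every time
# (or maintain its own replayed copy of it, as in the sketch above).
def ton20_balance(client: ChainClient, history_address: str, owner: str,
                  tick: str, effect_of) -> int:
    balance = 0
    for tx in client.transactions(history_address):  # deep historical scan
        balance += effect_of(tx, owner, tick)         # signed effect per the replay rules
    return balance
```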
You might say: fine, build your own indexer and store the data in a form convenient for these queries - for each block, tally all the transfers, mints and deploys of tokens. But in practice building such an indexer is a difficult and expensive task, so public indexers with APIs are used instead.
What is a simple way to reduce the load in this setup? Send all transactions to a single address: we do not care where a transaction came from, only what is inside it. Then, to get data about a token, you just request all transactions of that one address and process them - no need to index the entire blockchain.
This is exactly what the company that ported BRC-20 to TON did: they proposed sending all TON-20 transactions to one address - the burn address. It looks like this:
You can see it here: https://tonscan.org/address/EQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAM9c
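A sketch of what that shortcut looks like in practice: pull the transaction history of that single address and feed the message payloads into the replay logic. I am using the public toncenter getTransactions endpoint here; the parameter and field names are from memory, so treat them as assumptions and check the current API docs.

```python
import requests

# All TON-20 operations are addressed to the single burn address, so its history
# alone is enough to rebuild every balance. Endpoint and field names follow the
# public toncenter HTTP API as I recall it - verify before relying on this.
API = "https://toncenter.com/api/v2/getTransactions"
TON20_ADDRESS = "EQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAM9c"

resp = requests.get(API, params={"address": TON20_ADDRESS, "limit": 100, "archival": "true"})
for tx in resp.json()["result"]:
    in_msg = tx.get("in_msg", {})
    # The JSON "inscription" travels in the inbound message body; decode it and
    # feed the operation into replay(). To go deeper into history, page with the
    # lt/hash of the last transaction returned.
    print(in_msg.get("message"))
```

Even this "cheap" version means downloading and parsing every operation ever sent to that address, which is exactly the load pattern that hurt the public indexers.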
But the problem with this approach is that blockchains, even ones like TON, are not designed for a huge number of transactions per address. TON holds the record for transactions per second - about 100,000 - but not per address, and that is an important nuance. On top of that, the indexers that many TON services depend on were not ready for such load. As a result a queue of transactions formed, dragging down the services built on top of the blockchain.
The last layer of the house of cards, which aggravated the problem once there were many such transactions to a single address, was the validators - the very nodes that keep the network running. It turned out that many of them were using weak hardware: sometimes it is easier to pay fines than to constantly rent expensive machines.
All this led to a transaction queue. Yes, the blockchain survived and the queue eventually cleared, but it is still funny how a few JSON messages inside transactions, with all the logic built around them off-chain, caused such problems.
Some thoughts
The problems that arise from such technically odd solutions point to a difficult challenge blockchain creators face: they have to build not just a technical platform, which can be patched by hand when something goes wrong, but an ecosystem, where aligning the incentives of the participants building products inside it is hard.
If you look at the problem that arose in TON, you can see that no one acted with malicious intent - everyone behaved economically rationally. In such a situation it is easy to point fingers and say, for example, that indexers must be able to carry a huge load. But if we put ourselves in the shoes of their operators, we will see that the economics simply do not work out if you keep a pile of “hardware” in reserve just in case.
An answer to the problem of coordinating economic incentives can be found in economic theory: competition and well-organized access to resources such as grants and investment. Will the TON Foundation be able to cope with this? Nobody knows, but one thing is certain - competition between blockchains happens not only on the technical side or in the volume of investment they attract, but also in the ability to build a competent ecosystem within.
Conclusion
I like the TON blockchain for its technical elegance; at least it is not yet another copy of Ethereum being pumped up with capital without a second thought about whether users actually need it. If you want to learn more about the TON blockchain, I have open-source lessons that will teach you how to create full-fledged applications on TON.
https://github.com/romanovichim/TonFunClessons_Eng
I will be glad to see your stars on the repository. I also post new tutorials and data analytics here: https://t.me/ton_learn