
Understanding the Prospects and Challenges of Tron, Solana, and ETH2 in the Crypto Market

Upon analyzing the latest trends concerning Tron, Solana, and ETH2, it becomes clear that many of the earlier predictions have already come true.

However, certain aspects only become visible through the numbers. To make this clearer, it will help to walk through a few tests and sample situations for assessment.

Before delving into the specifics, let’s first discuss a few general points that are crucial to the prospects of these cryptocurrencies from 2023 to 2030.

It’s important to note that this material is not a theoretical opus but part of a series of articles published on Bits.media in 2019–2022, culminating in the first and second parts of the PoS-family study.

Examining the upcoming trends in the crypto market is essential for making informed decisions. By carefully analyzing the available information on Tron, Solana, and ETH2, one can identify potential opportunities and challenges.

However, it’s important to keep in mind that not all relevant information can be easily visualized. Sometimes, numerical data and calculations are necessary to get a better understanding of the situation.

With that said, the prospects for these cryptocurrencies depend on various factors, including technological advancements, regulatory frameworks, and market demand.

Keeping an eye on these factors and understanding how they interact with one another is crucial to making informed predictions about the future of the crypto market.

In conclusion, while it’s possible to identify some upcoming trends in the crypto market by analyzing recent developments, a deeper understanding often requires more than just visualization.

By considering both general and specific factors, we can gain a better understanding of the prospects for Tron, Solana, ETH2, and other cryptocurrencies in the years to come.

The Dunning–Kruger Effect and PoS

After the main part of the article had already been written, one of the Bits.media chats reminded me of this effect. The definition of the effect is as follows:

“A metacognitive bias in low-skilled people: they draw erroneous conclusions and make poor decisions, yet fail to recognize these mistakes because of incomplete knowledge, skills and abilities, arriving at a false sense of the limits of their competence and overestimating their abilities – even in unfamiliar areas of knowledge and first-time undertakings.”

This has been happening in the PoS family continuously since 2013. Perhaps the most striking example is Solana, which, despite huge venture-capital injections and fan marketing, still operates, as the crypto community aptly puts it, “with a lunch break”. Here is a recent example.

The fact is that the architectural problems of the “solar blockchain”, and others like them, receive little attention. There are several reasons for that:

  • Firstly, when attracting investment, projects focus on the positives rather than the negatives; this hardens into a public roadmap the project wants to fulfill “by all means”, leaving no time to rework the fundamentals. A counterexample is Ethereum and the transition to Ethereum 2.0: consistently acknowledging the disadvantages of the PoS family made it possible to solve a number of problems right at the start. Yes, it took more than six years, but the fixes landed straight in the test and production networks: not all of the declared ones, but many.
  • Secondly, competencies around particular programming ecosystems are still accumulating: this is easy to see by comparing Solidity coders with Rust/Go developers, who have only just started their journey. What does that mean? It means there is often simply no one to evaluate the potentially difficult spots. It is like visiting the red planet for the first time: it is unlikely that Matt Damon would have coped with the presented set of difficulties as famously as his hero from The Martian.

If you think this question is purely theoretical, I recommend the interview with Alexander Skidanov, where, honestly, openly and based on an understanding of the market, he talks about changing Near’s strategy and paradigm. And if we step away from the PoS family for a minute, the problem is inherited by blockchains of any level and pattern: BCH/BSV are vivid examples.

Crashes vs finalization

If you recall the basic principles of PoS-family architecture, you will find that three parameters are usually discussed, even though most often they are treated as unrelated to each other:

  1. TPS as a rough, conditional benchmark of the blockchain’s operational throughput.
  2. Finalization as the second, corrective parameter.
  3. Decentralization: more precisely, its level (the higher, the better).

Here are some examples to make this clearer (a small sketch after the list gathers them side by side):

  1. BSC (BNB Chain; on the differences, see the link): finalization in 33 seconds, but the decentralization of its super-nodes suffers under any approach (see the example statistics). On the one hand, this makes it possible to roll back quickly after hacks and other troubles; on the other, it differs little from classical finance and, as a result, does not benefit the end participants.
  2. Polygon: if you carefully study all the twists and turns of this system (I hope to devote a future publication to this sad chain – almost the only one from the Plasma subfamily), it becomes clear that 150+ confirmations are often not enough, and all of this badly slows down dApps and makes applications unstable, which is especially evident in cross-chain bridges.
  3. Harmony: if you are “lucky enough” to deploy something on this system, especially through a grant, you may have noticed that many older applications are simply gone from the testnet; finalization without correlation with social consensus therefore achieves little.
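
To keep the three parameters side by side while comparing chains, here is a minimal Python sketch. The ChainProfile structure and its field names are my own illustration; only the figures named above (BSC’s 33-second finalization, Polygon’s 150+ confirmations) come from the text, and everything unknown is deliberately left empty.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChainProfile:
    """The three parameters discussed above, collected per chain."""
    name: str
    tps: Optional[float]                 # rough, conditional throughput benchmark
    finality_seconds: Optional[float]    # time to practical finalization
    confirmations_needed: Optional[int]  # where finality is probabilistic
    decentralization_note: str           # qualitative for now

# Only the figures named in the text are filled in; the rest stays None.
profiles = [
    ChainProfile("BSC (BNB Chain)", tps=None, finality_seconds=33.0,
                 confirmations_needed=None,
                 decentralization_note="super-node decentralization suffers"),
    ChainProfile("Polygon", tps=None, finality_seconds=None,
                 confirmations_needed=150,  # "150+ confirmations are often not enough"
                 decentralization_note="unstated"),
    ChainProfile("Harmony", tps=None, finality_seconds=None,
                 confirmations_needed=None,
                 decentralization_note="weak without social consensus"),
]

for p in profiles:
    print(p)
```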

ChatGPT to the rescue

But these are all general points. Let’s try to verify them via ChatGPT:

  1. Finalization time: on blockchains with PoS, DPoS, LPoS and similar consensus algorithms, finalization can take some time, which can lead to transaction delays and waiting for finalization confirmation. In some cases this is unacceptable for users awaiting confirmation.
  2. Consensus algorithms may be more vulnerable to “51% attacks”: for example, in DPoS, where a small number of participants can control a large part of the blockchain, 51% attacks become possible. In such situations finalization can be broken, which threatens the security of the network. [Note from Menaskop: here, of course, the AI missed the mark, because a discussion of the attacks specified in the first and second parts of the study would have been more apt, but it captured the essence of the attack through the combination of Sybil-like vulnerabilities, so let’s leave it as it is.]
  3. Ability to cancel transactions: in some cases, when finalization is not final, transactions can be canceled. This can happen if a blockchain participant wants to change data that has already been recorded, and in some cases it can lead to financial losses or other problems. [Note from Menaskop: again corrected a little here, because exactly this was discussed in the previous parts.]
  4. Lack of decentralization: in some consensus algorithms, such as PoS, LPoS and others, larger cryptocurrency holders have more influence on decisions made on the blockchain. This can lead to a lack of decentralization and affect the transaction-finalization process. If several large holders agree among themselves to cancel a certain transaction or block, finalization can fail, creating potential problems in the blockchain. [Note from Menaskop: here the AI again muddled the essentials, but I leave it as is, because the decentralization vector is important for this article.]

As you can see, I had to correct ChatGPT’s theses slightly, but on the whole they are correct. Now let’s look at the situation from the practical, empirical side.

Examples and stories

Of course, it’s worth starting with EOS, a project on which many pinned similar hopes: in 2018 EOS suffered a failure in which several validators began processing the same block, which in turn broke transaction finalization. The incident is reported in various sources, but I will leave a link to this tweet, because these days it is not so easy to find on the original developers’ official site.

One could object that this happened only at launch, or one could recall that Qihoo 360 had warned the crypto community a week earlier that there were many more bugs.

It is for this reason that, over and over, people conclude that EOS is not a blockchain but “a distributed homogeneous database management system, and the RAM market is a cloud computing service”. On the other hand, it is in this example that we see how primary centralization (secondary centralization was discussed in the first part) ultimately affects security.

In fact, it is social consensus repeatedly proving a more powerful tool than technical consensus that led the DPoS subfamily built on Graphene (BitShares/Steem/Golos/etc.) into decline.

No, there are many reasons, but we are interested in the low-level patterns: Tron vs Steem(it), EOS in the example above, the long conversation about Golos.io vs Golos.id – all of this is precisely the confrontation between basic technical consensus and social consensus, in which the latter prevailed. Constantly.

Therefore, the point of the story is not how to interpret the error or what conclusions to draw from it, but that this problem is architectural, and in the era of multichain it will only develop further. How? Let’s talk about that below.

In the meantime, the first numbers.

[Charts: Solana and Ethereum network statistics]

Compared by TPS alone, Solana is ahead by a clear margin. But if we do a fuller piece of research – which is what we are doing – it turns out to be much more important to:

  1. Take into account not (only) transaction speed, and not even finalization speed, but the finalization mechanism itself: if this process can be influenced too heavily, the blockchain will be unstable.
  2. Look at the hypothetical TVL (Total Value Locked) transmitted per transaction: more on this in the next paragraph.
  3. Evaluate the level/degree of decentralization: starting from supernodes (validators, delegates, etc.) and ending with HODL – the wallet owners.

With regard to point 2, this means the following (a toy sketch follows the list):

  1. What is the TVL of the entire DeFi segment on a particular blockchain.
  2. What is the percentage of long HODL waves, that is, of those who have kept the native coin of the given blockchain for three or more years.
  3. What is the median profitability of MEV bots and other participants that depend on them.
  4. What are the other financial parameters at the time of the transaction.
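
As a toy sketch of this “TVL per transaction” idea, promised above: the function, the weighting and all the numbers below are hypothetical illustrations rather than a standard metric; real figures would come from on-chain analytics of your choice.

```python
# Hedged sketch: spread the DeFi TVL over daily transactions, reward sticky
# long-term holding, and discount what MEV bots skim off. The 0.5/0.5 split
# and all inputs are made up for illustration only.

def economic_weight_per_tx(defi_tvl_usd: float,
                           long_hodl_share: float,      # supply held 3+ years, 0..1
                           mev_median_profit_usd: float,
                           daily_tx_count: float) -> float:
    if daily_tx_count <= 0:
        raise ValueError("daily_tx_count must be positive")
    base = defi_tvl_usd / daily_tx_count       # naive TVL per transaction
    stickiness = 0.5 + 0.5 * long_hodl_share   # long HODLers make value stickier
    return base * stickiness - mev_median_profit_usd

# Toy numbers only, to show the direction of the comparison:
print(economic_weight_per_tx(3e10, 0.25, 5.0, 1.1e6))  # an "expensive-fee" profile
print(economic_weight_per_tx(5e9, 0.05, 1.0, 3.5e6))   # a "cheap-fee" profile
```

Even with made-up inputs, the pattern of the next example shows up: the chain with fewer, more expensive transactions carries far more economic weight per transaction.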

Again, all this may seem redundant, but it helps tremendously in practice.

Let me give you an example: if we compare the number of deployed smart contracts on BSC vs Ethereum, Ethereum loses. Visually, statistically – however you like. But add the economic parameters, and the result is exactly the opposite. And this problem has been known for a long time: “cheap, or almost free, transactions” actually create two vulnerabilities at the architecture level:

  1. They demotivate supernode holders.
  2. They motivate the creators of scam tokens and similar products.

Conclusion No. 01. From technology to economy

We arrive at the first significant conclusion: the era of the technological aspect of consensus is not over, but it has passed its first mature stage, while the era of the economic aspect is only beginning.

What does it mean?

  • Firstly, due to growing interoperability at both the L1 and L2/L3 levels, stability parameters will be prioritized over the formal indicators of blockchain/DAG solutions.
  • Secondly, sooner or later a derivatives market for the L1/L2 technology stack will have to be created, because otherwise full interaction between different solutions is impossible.
  • Thirdly, summing the first and second theses suggests that the work of supernode owners must receive multi- and cross-chain tokenization, since otherwise it is impossible to pay for the positive effects needed for the stable operation of the system.
  • Fourthly, we get a simple formula: k = Stable_Work / Security, where Security is the security of the L1/L2 system, Stable_Work is its stability (normal operation of the system), and k is the coefficient of direct or inverse dependence.

In fact, k determines how the system works. Let’s say:

  • BSC (BNB chain) is close to the 3/1 formula, that is, stability exceeds security;
  • Bitcoin is about 1/1;
  • Solana – 1/3.

To calculate the parameters more accurately, each should be expressed as a sum of sub-parameters, in percent. Like this:

1. Let Average(Uptime) go through the gradation:

a. 100%-99% = 10%;

b. 98.9%-97% = 3%;

c. Less than 97% = 1%.

2. Next is the deviation from Average(Finalisation):

a. Less than 10% = 10%;

b. From 10% to 20% = 3%;

c. More than 20% = 1%.

3. And so on: there can be 3, 5, 10 or 100 parameters, depending on the accuracy you want to achieve.

Each parameter’s weight is calculated as follows: Parameter_Weight = 100% / Quantity(Parameters). In my case there are 10 parameters, so the maximum percentage for each is 10%.

Then just add up: 10% + 10% + … + 10% = 100% or less.
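
Here is a minimal Python sketch of this scoring, assuming only the two gradations given above; the remaining eight of my ten parameters would be graded the same way.

```python
# Stability score: each parameter is graded on the 10% / 3% / 1% scale above,
# then the grades are summed. 100% is the maximum with 10 parameters.

def grade_uptime(avg_uptime_pct: float) -> float:
    """Average(Uptime): 100-99 -> 10%, 98.9-97 -> 3%, below 97 -> 1%."""
    if avg_uptime_pct >= 99.0:
        return 10.0
    if avg_uptime_pct >= 97.0:
        return 3.0
    return 1.0

def grade_finalization_deviation(deviation_pct: float) -> float:
    """Deviation from Average(Finalisation): <10% -> 10%, 10-20% -> 3%, else 1%."""
    if deviation_pct < 10.0:
        return 10.0
    if deviation_pct <= 20.0:
        return 3.0
    return 1.0

NUM_PARAMETERS = 10
PARAMETER_WEIGHT = 100.0 / NUM_PARAMETERS   # 10% maximum per parameter

grades = [grade_uptime(99.5), grade_finalization_deviation(12.0)]
# ...plus the other eight graded parameters, omitted here.
print(sum(grades))   # 13.0 so far, out of a possible 100.0
```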

 

[Table: example calculation on test and incomplete parameters]

We do the same with security. It includes the following indicators:

  1. The number of supernodes to update.
  2. The number of hacks per year.
  3. Number of forks per year.
  4. Others.

Then we get, for example, 97/30, or 3.2(3): the stability parameter prevails over security, whereas a ratio of 1 would be ideal. Thus two architecturally significant aspects can be assessed at once:

  1. Absolute percentage of security and stability.
  2. Their ratio.

You can complicate the formula and add decentralization, but I recommend doing this after an initial analysis of stability and security.
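
A short sketch of the final step, using the example figures from the text (97 for stability, 30 for security); the security indicators themselves would be graded just like the stability parameters above.

```python
# Final step of the sketch: the ratio k = Stable_Work / Security.
# 97 and 30 are the example scores from the text; k = 1.0 would be ideal.

def k_ratio(stability_pct: float, security_pct: float) -> float:
    if security_pct == 0:
        raise ValueError("security score must be non-zero")
    return stability_pct / security_pct

stability, security = 97.0, 30.0
print(f"absolute scores: stability {stability}%, security {security}%")
print(f"k = {k_ratio(stability, security):.2f}")  # ~3.23: stability prevails
```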

Why do it at all? Reasonable question. I’ll try to answer it in detail.

Conclusion No. 02. PoS family at the fork

We have already seen community forks (Hive vs Steem(it), ETH vs ETC, BCH vs BTC, etc.), and we have seen technological forks (mostly soft, but also hard ones, when things start to smell of kerosene, as with Bitcoin in 2010). Now the era of financial-economic forks has arrived, and they will affect too many areas of activity for the process to be lost sight of or go unnoticed altogether.

Let me explain.

If this seems like a theoretical study, I hasten to disappoint you: no, it is already practice. An example from the report on the Cosmos ecosystem: because supernode owners weigh the economic aspect before launching any new project, an artificial oligopoly is effectively created, and because there is no primary (basic), public tokenization of consensus work at the multichain/crosschain level, it stays artificial. Simply put: whoever gets up first gets the slippers. Only here the slippers always go to the slipper manufacturer, although logically they should go to the buyers of the slippers.

I know at least two dozen teams engaged in validation and everything around it. The feedback is roughly the same everywhere: if you are not in the top, you can break through either (1) in difficult times (like now) or (2) by pure luck, which tends toward an infinitesimal value because launches are biased: projects want “verified money” and verified people, validators want quick profit, and as a result the level of decentralization tends to zero, and with it the level of technological consensus, replaced by the social kind.

This has a flip side: look at the controversy around unfreezing ETH2 stakes, or the withdrawal of validators on Solana.

In both cases we have the same bottleneck – the bottleneck of primary distribution: the technological stack does not cover the risks of the social layer because of initial centralization, and the social layer cannot solve the technological one because teams solve problems as they arise rather than at the level of basic architecture. (By the way, the renaming of ETH2 is another link in this process.)

What’s next?

Just as PoW once did, and then the mixed consensus schemes, the PoS family (which can be considered a consensus family only in a broad sense) has hit the ceiling (vertical scaling), the walls (horizontal scaling) and the floor (scaling at the level of the basic elements of the economic architecture) all at once.

This is very easy to trace through the examples that take the “best” from the worlds of PoW & PoS: Decred, Dash, etc.

This does not mean that something irreparable has happened, but it definitely indicates that the era of new systems has arrived. In part the described problems will be solved by ZKP mechanics, in part by detailing the elements of PoS architectures, but some part will still be left to innovation.

And here I will try to share a number of considerations.

Firstly, all three parts of the PoS materials are devoted to a number of problems, but most of them can be summed up in one phrase: the centralization of liquidity. The systems that manage to solve this problem will, in effect, become the advanced ones. The problem, in turn, has three elementary layers:

  1. Backend (this is just the consensus level and below).
  2. Frontend (it’s easy to understand from the example of Tornado Cash that a step in this direction has been made).
  3. Multi-chain/cross-chain interoperability.

So far, obviously, no one is working on the last of these. No: parachains, subchains, hubs, EVM compatibility and more do exist. But how does any of that solve compatibility at the basic, architectural level? It doesn’t.
Look again at the example of bridges: yes, the generation of bridges that create wrapped assets is behind us, and we can now complete transfers in the moment – say, on Allbridge, using standard stablecoins and/or other coins/tokens rather than derivatives. But how is the problem actually solved? In fact, through semi-automatic completion of transactions. The next step is crediting the native coin to the recipient’s wallet on the desired network.

What’s next?

Even in this extremely simple example, the following emerges:

  1. There is a transaction in blockchain #01 (let’s say Ethereum).
  2. There is a transaction in blockchain #02 (let’s say Polygon).
  3. Both have supernodes, on which the transferred value depends at each step.
  4. But the final motivation in the two transactions is different:

a) transaction #01 is initiated by the bridge client;

b) it is continued – in chain #01, and then in chain #02 – by the bridge itself;

c) and the final recipient of transaction #02 may be a completely different person and/or the same as the sender.
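
To make the motivation mismatch explicit, here is a toy Python model of the flow from the list above; the structure and all labels are my own illustration, not any bridge’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    """One leg of the cross-chain transfer described above."""
    chain: str       # where this leg settles
    initiator: str   # who is motivated to get this leg confirmed
    depends_on: str  # the supernodes the transferred value depends on

bridge_transfer = [
    Leg("Ethereum (#01)", "bridge client (sender)", "Ethereum supernodes"),
    Leg("Ethereum -> Polygon", "the bridge itself", "supernodes of both chains"),
    Leg("Polygon (#02)", "final recipient (maybe = sender)", "Polygon supernodes"),
]

for leg in bridge_transfer:
    print(f"{leg.chain:20} | motivated: {leg.initiator:32} | value depends on: {leg.depends_on}")
```

Notice who is missing from the “motivated” column at every step: the L1 supernodes themselves, which is exactly the paradox discussed next.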

Doesn’t it remind you of anything?

If we replace the bridge with a set of atomic swaps, and the receiver/sender with supernodes, we find that multi/cross-chain value transfer motivates anyone but the underlying L1 structures that earn from it (in the internal valuation of the systems, that is, in native coins), while everyone else may earn nothing at all.

Paradox?

I think so. But for now the systems keep receiving ever larger injections from neophytes, so this issue is either not considered at all or is considered at the level of custom solutions (and again, the EVM subfamily has gone the farthest here).

Actually, this is the solution to the problems that Vitalik Buterin identified regarding the cross-chain as such.

Another thing is that not all technologies have yet matured enough to take a swing at Chomolungma, and so there is still time for us – Web 3.0 researchers, netstalkers and IT entrepreneurs – to close this black hole with the creativity of new approaches.