Reduce Validator Set to 120

This proposal asks that the Osmosis validator active set be reduced from 150 to 120 to enhance network performance and validator sustainability while maintaining decentralization of stake.

Osmosis currently has a very long tail of validators, with the minimum stake needed to enter the active set being only 30k OSMO. This proposal would remove the bottom 2% of voting power and raise the entry threshold to 335k OSMO delegated.
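To illustrate where such an entry threshold comes from, here is a minimal Python sketch; the helper name cut_stats and the stake numbers are purely hypothetical (the figures above come from live chain data). The idea: cap the set at 120, take the stake of the last validator kept as the new entry point, and measure the share of voting power removed.

```python
# Hypothetical illustration only: given a list of delegated stakes (in OSMO),
# cap the active set and report the new entry threshold plus the share of
# voting power removed. The numbers below are made up, not real chain data.
def cut_stats(stakes_osmo, new_size):
    ranked = sorted(stakes_osmo, reverse=True)   # rank validators by stake
    kept, dropped = ranked[:new_size], ranked[new_size:]
    entry_threshold = kept[-1]                   # stake of the last validator kept
    removed_share = sum(dropped) / sum(ranked)   # fraction of total voting power cut
    return entry_threshold, removed_share

# 150 made-up validators, trimmed to 120
stakes = [5_000_000] * 20 + [1_000_000] * 60 + [400_000] * 40 + [100_000] * 30
threshold, share = cut_stats(stakes, 120)
print(threshold, f"{share:.1%}")                 # 400000 1.7% for this toy set
```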

There are two main concerns about the current size of the validator set:

Performance

A decentralized blockchain using CometBFT consensus requires more than two-thirds (about 67%) of voting power to commit each block or approve a software upgrade, while more than one-third (about 33%) can veto any change.

Even with this reduction, the number of validators making up the 33% that can veto a block (the Nakamoto Coefficient) remains at 10, maintaining Osmosis's position as one of the most decentralized Cosmos chains.

The number of validators required to approve a block decreases by only one, showing that these tail validators mostly add extra peering steps to consensus. Reducing the active set could improve block processing efficiency by cutting those unnecessary consensus steps while still maintaining a high degree of decentralization.
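To make the threshold arithmetic concrete, here is a minimal Python sketch with a made-up voting-power distribution (validators_to_cross is a hypothetical helper, not anything from Osmosis tooling): the Nakamoto coefficient is the smallest number of top validators whose combined power exceeds one third of the total, and the commit quorum is the smallest number exceeding two thirds.

```python
# Hypothetical sketch of CometBFT-style thresholds: the smallest number of
# top-ranked validators whose combined voting power exceeds a given fraction
# (> 1/3 can veto/halt, > 2/3 is needed to commit a block).
def validators_to_cross(powers, fraction):
    ranked = sorted(powers, reverse=True)        # largest voting power first
    target = fraction * sum(ranked)
    running = 0
    for count, power in enumerate(ranked, start=1):
        running += power
        if running > target:                     # strictly more than the threshold
            return count
    return len(ranked)

# Made-up distribution of 150 validators (not real Osmosis data)
powers = [8] * 10 + [1] * 140
nakamoto = validators_to_cross(powers, 1 / 3)    # Nakamoto coefficient (veto set)
quorum = validators_to_cross(powers, 2 / 3)      # validators needed to commit
print(nakamoto, quorum)                          # 10 77 for this toy distribution
```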

Validator Sustainability

Validators in the middle of the set have reported that operating on Osmosis is financially unsustainable, as they receive insufficient rewards to cover costs. This proposal would help ensure that the remaining active validators are more adequately compensated, reducing reliance on inflation-based rewards.

In the long term, all validators should be profitable to run. As Osmosis governance makes further changes to tokenomics to lower inflation, this issue is often raised as a reason to retain inflation. Eventually, validators should receive sufficient rewards from non-inflationary or net-neutral inflationary sources to perform their roles.

One way to move toward this is to reduce the active set over time so that all validators receive more appropriate rewards for their work, sizing the set by the number of validators Osmosis can afford to maintain rather than following the earlier expansion proposals, which aimed to reduce competition for a place in the active set. Since there is no competition to remain in the active set today, the inverse of the October 2022 increase proposal should hold: validators should be differentiating themselves to attract delegations rather than existing in a competition-free environment.

Further data is available from SmartStake.

Target Onchain Date: 9th February 2025


Hugely in favor of this (and of potentially going even further than 120). Validator sets in general have gotten too large and are mostly reminiscent of a time when sets were very competitive. The dynamic has changed, and the additional time to consensus isn't worth the small amount of stake that would be excluded by this proposal (if it ever was).

Small note: Stride currently delegates to (almost) the entire set per the copystaking proposal that passed on Osmosis a while back. Were this proposal to pass, Stride would adjust the copystaking set accordingly to delegate among only the top 120 validators (subject to all of the other rules outlined in the copystaking proposal).


This is a very tough subject in general, whether it concerns increasing or decreasing the validator set.

Increasing the set is often done under the pretense of decentralisation, but time and time again this has proven to be a false narrative that has never led to a more decentralised chain.

We have also seen that tail validators often (not all, so please take this with a small grain of salt) run on inferior hardware to come close to breaking even. This has an even stronger effect on the block times Osmosis can achieve as a chain, since these tail validators add both to the number of validators that must be communicated with and to the response times of those validator nodes.

With Osmosis developing towards shorter block times and a desire to become even faster, there is an absolute need to be selective about what we do and how we do it. Challenging the status quo is a very good thing.

I will be supportive of this reduction. Not because I am in the safe zone (for now), but because it is better for Osmosis as a whole. Especially once Polaris is released and the DEX needs to give a CEX-like feel, near-instant finality becomes an absolute must, and reducing the set is a small cog in the change needed to pull this off.


Generally against reducing validator set sizes, as they don't really do anything materially beneficial economically for the chain or validators.

However, I do understand the technical aspect of achieving faster block times, if desired. I also understand decentralization is a LARP.

I understand if you're trying to get sub-1-second block times and need a smaller validator set for that. (Currently ~1.5 seconds at 150 validators.)

But the economic reasons seem nonsensical.

The economic impact of this cut will indeed likely be small since it is only impacting a small percentage of voting power. However, this is a trial cut to see the effect. Happy to push further faster if the consensus moves that way.

If we can get to the point where we have a similar Nakamoto coefficient with a set around half the size, blocks move faster, and every validator is profitable on protocol emissions alone, then we don't need inflationary staking rewards. Attempting to cut those previously was met with concerns about validator sustainability.

I'd love us to get to a fully sustainable model and heavily cut/turn off inflation in the next few months, but sudden large changes to the set seem excessively risky, so we are working on a pathway to that destination.

I fully understand the faster block times and generally support that goal. Not against the cut to 120, or even 100. Do feel bad for the folks down there hoping to gain stake and build a brand though.

Would just focus on the tech aspect/block-time goals rather than saying it is in the interest of "decentralization" or "economic benefit".

Will be interesting to see how it plays out!

While the goal of faster block times is a valid argument here, let's unpack some of the risks we're brushing off:

  1. Peering. Yes, the tail increases the peering steps. But this is a crucial part of decentralization. If the validator set continues to consolidate around centralized geographic locations, it further increases the barrier to entry for future geo distribution because peering performance becomes heavily centralized around the common geo areas. It is already hard for validators who aren't in the same location as the popular data centers to join a set. They usually can't sync fast enough due to peer latency until they can get higher into the set (a catch-22). Once again, a system designed to reward the incumbents.

  2. Nakamoto coefficient... the goal of maintaining this number is mentioned, but nothing about how this change will accommodate that goal. What are people more likely to do if the vals they support at the tail get bumped? The most value-aligned ones may consider going to another tail validator that survived the cut... but let's be realistic. How many times can you shrink the set, bump off tail vals, and keep expecting people to manually keep up with the chaos that is the tail? The more we keep playing these games with the very people who are committed to supporting decentralization and smaller vals, the more they will burn out from getting punished, slashed, and losing out on staking rewards while they juggle around to whoever can keep up with the ever-changing val set. This pushes people to pick 'safer' bets towards the top. What can be done to help encourage a more stable, distributed set instead of constantly punishing those that are trying the hardest to keep decentralization from just being a LARP?

  3. Validator profitability is mentioned as a key driver of the issues with tail validators. Shrinking the set to address this is like issuing free money to boost the economy. We all know it just dilutes everyone at the bottom and drives all the value and wealth back up to the top. If we want to address the systemic causes of these issues, we have to look at more ways to deal with the massive over-allocations to the validators at the top and get that stake distributed down over the entire set more evenly.

I'm not opposed to a smaller set, because I agree that having a bunch of poor-performing, unprofitable validators at the bottom is not helping the chain.

But the question really should be addressing the same problem we've needed to address from the beginning: how to incentivize better stake distribution. Playing with the set size to address these issues is like fishing with dynamite. It'll get some fish on the table to eat, but it's not really building a sustainable supply of food.

Just some of my opinions and concerns, not a hard for or against. I'd like to see more ideation around actually addressing the systemic problem of encouraging a more distributed validator set instead of nothing but punitive actions against those sacrificing the most to contribute.

I'd love us to get to a fully sustainable model and heavily cut/turn off inflation in the next few months, but sudden large changes to the set seem excessively risky, so we are working on a pathway to that destination.

It seems this is the real motivation here, not the block time or other issues mentioned.

I'd again caution against making arbitrary changes to the validator set to address issues that are only indirectly affected by it. Perhaps simply reducing the val set size is a silver bullet here, but my experience being involved with a validator that has battled its way up through many, many active-set 'tails' is that reducing the val set just makes it even harder for small, hard-working, mission-driven validators committed to the chain itself to succeed without centralized backing to float them.

Either we embrace that the active set is going to be 'corporate validators' who have centralized funding and control, and the independent little guy is no longer a priority, or we look for other systemic changes that encourage a more distributed set that welcomes smaller, more mission-aligned validators to be successful.

The long tail of validators on a chain is sort of the current 'proving ground' for new players to come in at an affordable level, sacrifice their time and profitability to grow their community support, and slowly rise up into the ranks. Without this proving ground, there will be no new entrants of this type. Only larger, more centralized, externally funded ones that are more profit-driven than value-driven.

Is that the active set we want? How can we accomplish both goals here, without increasing the centralization risks?