Reduce Validator Set to 120

This proposal asks that the Osmosis validator active set be reduced from 150 to 120 to enhance network performance and validator sustainability while maintaining decentralization of stake.

Osmosis currently has a very long tail of validators, with the minimum stake needed to enter the active set being only 30k OSMO. This proposal would remove the bottom 2% of voting power, raising the entry point to roughly 335k OSMO delegated.
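As a rough illustration of where these numbers come from: the new entry point is simply the stake of the 120th-largest validator, and the removed voting power is the tail's share of total stake. A minimal Python sketch with made-up placeholder stakes (the real figures are available from SmartStake):

```python
# Hypothetical stake distribution (OSMO delegated per validator) for a
# 150-validator set. These numbers are illustrative, not real Osmosis data.
stakes = sorted(
    [7_000_000] * 2 + [2_000_000] * 28 + [500_000] * 90 + [100_000] * 30,
    reverse=True,
)

NEW_SET_SIZE = 120
entry_point = stakes[NEW_SET_SIZE - 1]                # stake of the 120th validator
cut_share = sum(stakes[NEW_SET_SIZE:]) / sum(stakes)  # voting power removed

print(f"new entry point: {entry_point:,} OSMO; voting power cut: {cut_share:.1%}")
```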

There are two main concerns about the current size of the validator set:

Performance

A decentralized blockchain using CometBFT consensus requires more than two-thirds of voting power to approve each block or software upgrade, while any group holding more than one-third can veto changes.

Even with this reduction, the number of validators needed to reach the one-third veto threshold (the Nakamoto coefficient) remains at 10, maintaining Osmosis’s position as one of the most decentralized Cosmos chains.

The number of validators required to approve a block decreases by only one, showing that these tail validators mostly add extra peering steps to consensus. Reducing the active set could improve block processing efficiency by removing unnecessary consensus steps, while still maintaining a high degree of decentralization.
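To make the veto and approval figures concrete, both the Nakamoto coefficient and the block-approval quorum can be computed as the smallest number of top validators whose combined voting power crosses a threshold (one-third and two-thirds respectively). A minimal sketch, again using hypothetical stakes rather than real Osmosis data:

```python
def min_validators_for(stakes: list[int], threshold: float) -> int:
    """Smallest number of validators (largest first) whose combined
    voting power strictly exceeds `threshold` of the total stake."""
    total = sum(stakes)
    running = 0
    for count, stake in enumerate(sorted(stakes, reverse=True), start=1):
        running += stake
        if running > threshold * total:
            return count
    return len(stakes)

# Hypothetical distribution for illustration only (not real Osmosis data).
stakes = [7_000_000] * 2 + [2_000_000] * 28 + [500_000] * 90 + [100_000] * 30

nakamoto = min_validators_for(stakes, 1 / 3)  # smallest group able to veto (>1/3)
quorum = min_validators_for(stakes, 2 / 3)    # smallest group able to commit (>2/3)
print(f"Nakamoto coefficient: {nakamoto}; approval quorum: {quorum}")
```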

Validator Sustainability

Validators in the middle of the set have reported that operating on Osmosis is financially unsustainable, as they receive insufficient rewards to cover costs. This proposal ensures that active validators are more adequately compensated, reducing reliance on inflation-based rewards.

In the long term, all validators should be profitable to run. As Osmosis governance makes further changes to tokenomics to lower inflation, this issue is often raised as a reason to retain inflation. Eventually, validators should receive sufficient rewards from non-inflationary or net-neutral inflationary sources to perform their roles.

One way to work toward this is to reduce the active set over time, ensuring that all validators receive more appropriate rewards for their work, and to scale the set size based on the number of validators that Osmosis can afford to maintain, rather than following the previous expansion proposals, which aimed to reduce competition for remaining in the active set. As there is now no competition to remain in the active set, the inverse of the October 2022 increase proposal should hold: validators should be differentiating themselves to attract delegations rather than existing in a competition-free environment.

Further data is available from SmartStake.

Target Onchain Date: 9th February 2025

3 Likes

Hugely in favor of this (and of potentially going even further than 120). Validator sets in general have gotten too large and are mostly a relic of a time when sets were very competitive. The dynamic has changed, and the additional time to consensus isn’t worth the small amount of stake that would be excluded by this proposal (if it ever was).

Small note: Stride currently delegates to (almost) the entire set per the copystaking proposal that passed on Osmosis a while back. Were this proposal to pass, Stride would adjust the copystaking set accordingly to delegate amongst only the top 120 validators (subject to all of the other rules outlined in the copystaking proposal).

4 Likes

This is a very tough subject in general, whether it concerns increasing or decreasing the validator set.

Often increasing the set is done under the pretense of decentralisation, but time and time again it has been shown that this is a false narrative and has never led to a more decentralised chain.

We have also seen that tail validators often (not all, so please take my words with a small grain of salt) run on inferior hardware to come close to breaking even. This has an even stronger effect on the block times Osmosis can achieve as a chain, since these tail validators add both to the number of validators that must be communicated with and to the response times of those validator nodes.

With Osmosis developing towards shorter block times and a desire to become even faster, there is an absolute need to be selective in what we do and how we do it. Challenging the status quo is a very good thing.

I will be supportive of this reduction. Not because I am in the safe zone (for now), but because it is better for Osmosis as a whole. Especially once Polaris is released and the DEX needs to give a CEX-like feeling, near-instant finality will be an absolute must, and reducing the set is a small cog in the change needed to pull this off.

4 Likes

Generally against reducing validator set sizes, as they don’t really do anything materially beneficial economically for the chain or validators.

However, I do understand the technical aspect of achieving faster block times, if desired. I also understand decentralization is a LARP.

I understand if you’re trying to get sub-1-second block times and need a smaller validator set for that. (Getting ~1.5 seconds at 150 validators.)

But the economic reasoning seems nonsensical.

1 Like

The economic impact of this cut will indeed likely be small, since it only affects a small percentage of voting power. However, this is a trial cut to see the effect. Happy to push further, faster, if the consensus moves that way.

If we can get to the point where we have a similar Nakamoto coefficient with a set around half the size, blocks move faster, and every validator is profitable on protocol emissions alone, then we don’t need inflationary staking rewards. Attempting to cut those previously was met with concerns about validator sustainability.

I’d love us to get to a fully sustainable model and heavily cut/turn off inflation in the next few months, but sudden large changes to the set seem excessively risky, so I am working on a pathway to that destination.

2 Likes

I fully understand the faster block times and generally support that goal. Not against the cut to 120, or even 100. Do feel bad for the folks down there hoping to gain stake and build a brand though.

I would just focus on the technical aspects and block time goals rather than saying it is in the interest of “decentralization” or “economic benefit”.

Will be interesting to see how it plays out!

1 Like

While the goal for faster block times is a valid argument here, let’s unpack some of the risks we’re brushing off:

  1. Peering. Yes, the tail increases the peering steps. But this is a crucial part of decentralization. If the validator set continues to consolidate around centralized geographic locations, it further raises the barrier to entry for future geographic distribution, because peering performance becomes heavily centralized around the common geographic areas. It is already hard for validators who aren’t located near the popular data centers to join the set; they usually can’t sync fast enough due to peer latency until they climb higher in the set (a catch-22). Once again, a system designed to reward the incumbents.

  2. Nakamoto coefficient… the goal of maintaining this number is mentioned, but nothing about how this change will accommodate that goal. What are people more likely to do if the validators they support at the tail get bumped? The most value-aligned delegators may consider moving to another tail validator that survived the cut… but let’s be realistic. How many times can you shrink the set, bump off tail validators, and keep expecting people to manually keep up with the chaos that is the tail? The more we keep playing these games with the very people who are committed to supporting decentralization and smaller validators, the more they will burn out from getting punished, slashed, and losing out on staking rewards while they juggle around to whoever can keep up with the ever-changing validator set. This pushes people to pick ‘safer’ bets towards the top. What can be done to help encourage a more stable, distributed set instead of constantly punishing those that are trying the hardest to keep decentralization from just being a LARP?

  3. Validator profitability is mentioned as a key driver of the issues with tail validators. Shrinking the set to address this is like issuing free money to boost the economy: we all know it just dilutes everyone at the bottom and drives all the value and wealth back up to the top. If we want to address the systemic causes of these issues, we have to look at more ways to deal with the massive over-allocations to the validators at the top and get that stake distributed down over the entire set more evenly.

I’m not opposed to a smaller set, because I agree that having a bunch of poor performing, unprofitable validators at the bottom is not helping the chain.

But the question really should be addressing the same problem we’ve needed to address from the beginning: how to incentivize better stake distribution. Playing with the set size to address these issues is like fishing with dynamite. It’ll get some fish on the table, but it doesn’t really sustain a steady flow of food.

Just some of my opinions and concerns, not a hard for or against. I’d like to see more ideation around actually addressing the systemic problem of encouraging a more distributed validator set instead of nothing but punitive actions against those sacrificing the most to contribute.

3 Likes

> I’d love us to get to a fully sustainable model and heavily cut/turn off inflation in the next few months, but sudden large changes to the set seem excessively risky, so I am working on a pathway to that destination.

It seems this is the real motivation here, not the block time or other issues mentioned.

I’d again caution against making arbitrary changes to the validator set to address issues that are only indirectly affected by it. Perhaps simply reducing the validator set size is a silver bullet here, but in my experience being involved with a validator that has battled its way up through many, many active-set ‘tails’, reducing the set just makes it even more impossible for small, hard-working, mission-driven validators committed to the chain itself to succeed without centralized backing to float them.

Either we embrace that the active set is going to be ‘corporate validators’ who have centralized funding and control, and the independent little guy is no longer a priority, or we look for other systemic changes that encourage a more distributed set that welcomes smaller, more mission-aligned validators to be successful.

The long tail of validators on a chain is sort of the current ‘proving ground’ for new players to come in at an affordable level, sacrifice their time and profitability to grow their community support, and slowly rise up into the ranks. Without this proving ground, there will be no new entrants of this type. Only larger, more centralized, externally funded ones that are more profit driven than value driven.

Is that the active set we want? How can we accomplish both goals here, without increasing the centralization risks?

2 Likes

First and foremost: I work with a validator that would be impacted by this; however, these thoughts and concerns are my own.

The delegation market should operate as a free market: validators can determine for themselves whether it’s going to be profitable to operate in the active set, and they can also adjust their commission to manage their own profitability. I don’t really see profitability as an issue with the larger validator set.

Increasing centralization to achieve shorter blocks might make sense if it’s mission-critical, but it’s hard to justify unless we can quantify the impact.

Hypothetically, what if one of the top voting-power validators is underperforming, leading to longer blocks? Does that erode any benefit we’d see?

Am I also wrong in thinking we can come to consensus entirely without the last 30 validators because the vote power is so small - so they wouldn’t impact block times? Or is this mistaken thinking?

There’s a lot of concentration amongst the top validators and a very long low vote power tail - I do think efforts might be better spent directing delegations towards capable validators in the tail to promote decentralization rather than cutting the tail entirely.

I’m all for cutting inflation if that’s the desire of OSMO holders, but I don’t see its relevance to reducing the validator set.

2 Likes

Why do you think that distributing the rewards from the bottom 30 validators will make those above them more economically viable?

All you’re doing with this proposal is killing 30 teams that have put time and money into being in the validator set. They do no harm, but you will do harm to them by excluding them from your club.

3 Likes

This comes from the validator at the bottom of the set who is attempting to trick users into staking with them by renaming to Polaris on Osmosis and Skip on the Cosmos Hub.

If you need to do this at 100% commission, it really shows how little lower-ranked validators make and how they are often unable to differentiate themselves by other means.

3 Likes

Chill Validation disagrees with reducing the set due to economics.

There is enough chain inflation to ensure all validators can run optimally, but the distribution gives too much to some and not enough to others.

It’s unlikely that the top validator at 7.03% works 700x harder than the lower validator at 0.01%. Osmosis should be the chain that innovates out of this imbalance.

We would be more interested in seeing a more novel inflation distribution that reduces excess inflation at the top and provides better distribution to the bottom. We are not asking for equal distribution, but we advocate for smarter chain design to overcome the bias of delegators who do not understand the value of a balanced set.

We also agree that “trickster” validators should not be rewarded. Smart rewards should exclude malicious actors.

With improvements coming to p2p in cosmos-sdk, we also believe that peering should improve through technology rather than by cutting validators who have equally invested in the chain.

3 Likes

Of course, this proposal will put a significant number of validator teams at a disadvantage—teams that, unlike the larger ones, are actively contributing to the network’s growth. Meanwhile, the larger validators will naturally support this proposal, as they stand to gain even more delegations and, consequently, higher commissions. This is pure centralization.

There is no strong justification for reducing block time. Osmosis primarily functions as a DEX, and the current block time is satisfactory for everyone.

The proposal itself is illogical—first expanding the validator set, only to reduce it again, and by as many as 30 spots at once.

It is evident that the proposal will pass because it benefits validators who are not at risk of being removed. However, I urge large validators to carefully consider the implications of this proposal. We are committed to the network’s growth and contribute technically with good uptime, timely updates, and active participation—unlike some larger validators who merely collect commissions, ultimately devaluing OSMO.

I suggest focusing more on those who generate real value for the project and ensuring a fairer distribution of delegations among these validators.

3 Likes

We are voting against this proposal as it does not bring any real economic benefit to the network. Some validators who contribute to the network’s development are ranked below 120, and it is simply unfair to deprive them of the opportunity to validate the network. Moreover, this is a step toward centralization and a limitation on the network’s growth potential.

3 Likes

As an independent validator that would be affected by this change, Oldcat validator disagrees with this proposal.

While faster transactions are welcome, they shouldn’t be prioritized at the expense of our community and the valuable contributions of smaller, actively engaged validators. Reducing the validator set directly harms these dedicated participants who have invested significant time and resources in Osmosis. It unfairly risks pushing them out, diminishing their contributions and discouraging future involvement.

The idea of enhancing some validators’ earnings simply by cutting the number of validators is strange, though not new to me. While making sure validators can operate sustainably is important, simply removing some may not help; it may just lead to a less even distribution of rewards and make it harder for new validators to join and grow. A group of larger validators could become entrenched, regardless of their performance. I like Osmosis and I don’t want it to go that way. The validator market is essentially free: validators can choose to leave if they don’t find it worthwhile. It’s preferable to maintain the current structure and allow the market to self-regulate.

Having a Nakamoto coefficient of 10, the highest in the Cosmos ecosystem, does not mean we are in good shape. It may simply indicate that the Cosmos ecosystem as a whole needs to strive for better decentralization. Osmosis is the most important chain and hope of the Cosmos ecosystem. We need to consider the impact of this harmful proposal.

3 Likes

As said, this is a very, very interesting conversation where there are two sides of the coin that are both right.

It is also safe to conclude that the earlier expansions of the set (which happened largely under the pretense of decentralisation) were a mistake to begin with. That is the main reason I support this proposal: we need to go back to basics.

The reduction should never be presented as the solution for decentralisation. It is a technical measure, and we will need to determine how much effect it really has down the line on block production and more.

However, the distribution of voting power is and remains a highly contested subject. If you recall, proposals have been put up since the early days of Osmosis to differentiate reward APR depending on whether a validator is high or low in the set, and there is also the choice Stride sadly made to stop a committee focused on activity for the project and go for copy-staking instead. Note that all of this was supported by both the Osmosis and Stride communities, meaning that the majority of people really don’t care about decentralisation. Not of voting power, not of validators, not of geography, not of self-operated versus hosted. That is a reality we have to deal with, and we must come up with new innovative solutions.

And I really, really hope we get to the point where we tell each other we need to expand the set again because our model allows it. At this point in time it simply doesn’t, but we need to do better.

2 Likes

We are against reducing the number of slots in the active set. The arguments given in the proposal do not seem convincing or objective. It is not obvious that this will help achieve the stated goals, but it is obvious that it will hit the teams of small validators, who support and develop the network on enthusiasm alone and often at a financial loss. This is the path to centralization: small teams without the support of centralized groups and funds will no longer be able to participate in the development of the project. We do not need a new CEX; we are for an independent and strong DEX.

CosmoNibble

1 Like

You’ve fucked us over repeatedly with your half-baked ideas, e.g. dropping incentives from OSMO-USDC without giving liquidity providers enough time to unbond.

Did you even calculate how much the bottom 30 earn at 5%? It’s minuscule and will not make anyone in the top 120 suddenly profitable.

The improved networking performance with fewer validators isn’t really an issue, obviously, since the chain has been just fine for ages, so your specious presentation is totally disingenuous.

The view must be great from your ivory tower.

1 Like

Hey everyone,

I want to share some thoughts on the current proposal to reduce the active validator set on Osmosis. While the argument is that fewer validators will boost performance and sustainability, there are several critical points we need to consider.

Right now, it’s not just the bottom 30 validators that are struggling: almost 100 validators are barely earning enough. For example, validators around the 50th position have about 2.1M OSMO staked and are earning roughly $500 USD a month. When you compare that with networks where around 50 validators sustain themselves profitably, it’s clear that this is the kind of model we should be aiming for.

Moreover, is it really the network’s responsibility to cover validator expenses? Ultimately, these costs should be managed by the validators themselves. GATA HUB has already been funding hundreds of validator transactions out of our own pocket over the past months and years. Where has the foundation’s support been for this? We haven’t seen any meaningful backing.

If profitability isn’t achieved and validators continue to operate at a loss, why should we remain on a network that doesn’t prioritize our sustainability? Should the foundation be concerned with the welfare of validators if it’s not willing to make the necessary changes to ensure that the remaining 120 validators are financially viable?

Ultimately, if the network doesn’t support sustainable validator operations, we might have no choice but to exit. This is a critical issue that demands a serious answer—will the foundation ensure that the reduced set of 120 validators will actually be sustainable?

Of course - NO!

Why do we need to kick so many good small validators out of the set?
This is a harmful proposal!

Sorry that we didn’t write this before voting started, but we will always support decentralization and small validators!

We need more validators, not less!

5 Likes