1kx Tokenomics Proposal

Hey @mike_1kx,

Thank you for the proposal. I think it is important that we explore all viewpoints and I appreciate you raising the centralisation concern. Ultimately, having the hard conversations is what makes Rocket Pool stronger.

I am going to focus on the 1kx proposal rather than the criticism raised against the rework proposal.

I will evaluate based on product goals:

  • RPL Value
  • Competition
  • TVL Growth
  • Principles

RPL Value

This proposal retains RPL’s current value proposition and augments it. To grow as a protocol, RPL would need to be staked linearly with TVL. By decoupling node operators, ETH providers, and RPL delegators, it increases the supply side market. It also augments RPL value by using the delegate share to provide an ETH return for RPL delegators and adds buy pressure from the protocol.

I do have some concern that the protocol will end up buying a lot of its own token due to under-collateralised nodes. If RPL earns a decent yield within the protocol without running a node, we may exacerbate the liquidity issues that currently exist.

I am also not sure how RPL works as a governance token with delegation by default.

Competition

With this proposal, a node operator would not have to stake RPL for their first n validators but would subsequently require 10% of borrowed ETH collateral in RPL to launch validators either through providing it themselves or seeking delegation.

So for node operators with < n validators, we would be offering ETH-only node staking but larger operators would still have the RPL requirement. I believe that seeking RPL delegation would add considerable friction that Lido does not have.

I like the fact we are prioritising small node operators but I do worry that a considerable amount of our TVL comes from individual node operators that have a large amount of ETH. This may be a consequence of the trade-off, but we would be uncompetitive in this space; these node operators would favour Lido and contribute considerably to increasing their TVL. I guess it would depend on the value of n.

As you mentioned, the free-collateral mechanism is gameable, so it would depend on the trade-off between holding 10% of borrowed ETH collateral in RPL and the bond curve returns. It would be good to see analysis on this. My intuition says that a significant number of node operators would rather game the system and accept marginally less yield.

This proposal allows node operators to set their own commission rate. So in theory they could set the commission rate higher than they would receive from Lido. In practice they will have to compete with other node operators to attract RPL delegation. The design intentionally minimises node commission, which is great for rETH but means our node commission may be uncompetitive compared to Lido. The worst case is that we end up in a race to the bottom and we price out node operator supply, particularly individual home stakers.

I like the idea that market dynamics can find an equilibrium but practically we have to compete on both sides of our market. I tend to agree that a lower commission rate favours centralised operators with a low cost base. This could be solved by using minimum commission rates, but that weakens the RPL delegation mechanism somewhat. For the free-collateral node operators, it looks like the recollateralisation amount comes out of their node commission - is that right?

I agreed with your previous proposal that rETH yield has not been discussed enough and should form part of the future vision.

TVL Growth

As you said, our TVL is currently constrained because we have to find enough technical node operators that have ETH and will hold RPL. This proposal breaks that link and means those providing ETH and RPL could be separate parties.

Currently supplying RPL to a node is not trustless but there are potential solutions on the horizon that help in that regard.

The proposal decouples the parties providing each form of collateral but it doesn’t decouple RPL from the protocol (by design). In the proposal, TVL would be defined (and limited) by the game theory between node operators and RPL delegators. Node operators may have more ETH to stake but be unable to attract delegation, causing TVL to stall.

Although I would love to think that everyone who holds RPL is aligned, RPL can be bought on the market and used as mercenary capital, especially if it is attracting a good yield. Mercenary capital is a flight risk that can make our ability to scale less predictable.

Fluctuations in RPL price will continue to affect the protocol’s TVL adversely because of the 10% borrowed ETH collateral ratio. Low RPL prices push many people under the 10% ratio so they cannot deploy more ETH unless they can attract RPL delegation, driving down commissions and potentially losing node operators.

Principles

When we first designed Rocket Pool, we considered delegated staking (on ETH side). The issue with delegation is that it ends up relying on identity. We have seen in our own delegate voting system that delegations are attracted to known entities only - this is even more true when the decision is financial (capital at risk or yield on the table). We chose to distribute stake across a network to avoid centralising stake on known entities. This is extremely important to ensure the protocol is permissionless and open.

Delegated RPL stake would end up centralising stake on known / probably doxed entities and would disadvantage small anonymous node operators. As others have said, even if the pDAO was the delegate of last resort we would not be able to tell the difference between a small operator and a large operator (spread across multiple nodes).

I do think that delegation will help us scale but I don’t believe it should be at the base layer. I think it is better that the base layer stay as permissionless as possible and delegation is achieved higher in the stack (Nodeset, RocketLend, etc).

By retaining the RPL collateral requirement the proposal does provide a disincentive for centralised entities to capture the Rocket Pool validator set. I wouldn’t say that it is a robust mechanism though, because it would still be very possible. In fact, if you do not decouple RPL as a collateral it ends up being worse. A centralised entity could capture the validator set and by default governance.

Conclusion

Thank you for proposing a solution that tries to retain RPL as a collateral. I do think it is a good exercise. I have also tried to see how we can maintain that consistency but, if I am honest, I kept hitting the same brick walls. This proposal is a well-thought-out solution, but frankly I don’t see the core delegation system helping us compete or stay permissionless. I would be happy to hear your feedback and be corrected.

It is my opinion that RPL as a collateral does not allow Rocket Pool to compete or to scale. Removing RPL as a collateral is a calculated risk - I am under no illusions. Rocket Pool will likely attract very large node operators, some of whom may be centralised entities. What is important is that we lower the barrier to entry for individual operators and remain permissionless. By doing this we attract the long tail of node operators that will ensure Ethereum is permissionless, open, and credibly neutral.


Hi Knoshua, thank you for the detailed response - I appreciate the time you spent reviewing the proposal and compiling such thoughtful questions.

The 1kx ideas seem to be in a very early stage and as presented here don’t amount to a coherent proposal that can be considered for vote.

Bottom line is that there is no 1kx proposal, just some loose ideas with giant holes and there seems to be no interest to do anything productive

Our goal with our proposal is to provide an alternative choice to those who are not in favour of the rework. As per LongForWisdom’s comment:

We can make sure that the governance route we take makes space for 1kx’s proposals as an alternative or addition. ie, when we do a forum vote, we can add a poll on 1kx’s proposal + ideas, to find out if there’s desire to explore them further.

We need to ensure the community has enough information to make a choice between the proposals, but it is not necessary for every protocol parameter to be fully defined in order to make a comparison. Just as we can discuss whether UARS is a good idea without knowing whether no_share is x% or y%, we can assess the components of our proposal without having every protocol parameter defined.

Some of your questions will require significant time investment to answer in full detail. We are happy to commit resources to building models and performing analysis but we first want to know if there is community support for the idea. If 90% of the community is happy with the rework direction, the in-depth analyses to define our protocol parameters are not required. The first step is for the community to select a direction. Once a direction is selected, protocol parameters can be finalised.

And speaking of timelines: we are making a decision that will affect Rocket Pool for years to come. Let’s not delay unnecessarily, but let’s also not be hasty. There is a lot of value at stake here, and rushing would be reckless.

We hope that, if the community expresses a preference for our direction, the rework proposal authors will be willing to work constructively with us to bring this to market as soon as possible. They have a lot of valuable knowledge about the protocol, and the final outcome would be stronger with their input, so it would benefit the entire community for us to work together.

We have acknowledged that our proposal is at an earlier stage, and we would need some time to finalise the protocol parameters in collaboration with the community, but there is no reason this should lead to significant delays. If our proposal is accepted, 1kx can dedicate resources (dev, data analysts, etc.) to ensure the proposal is ready for implementation within similar timelines. Many of the components (megapools, bond curves) are the same as in the rework, albeit with different parameters, so implementation time for many subcomponents will be identical.

As Val mentioned there is certainly room to speed up the discussion process. We would propose weekly/biweekly community calls to facilitate efficient discussion of the protocol parameters and ensure we are ready for launch as soon as possible.

I would appreciate someone from 1kx answering these questions I had so far.



The difference in your communication style between Discourse and Discord is so striking that I had to double-check the username to ensure it was the same person.

Accusing me of “playing stupid games” when I am actually posing thought experiments to highlight flaws in your proposal is dismissive and unproductive.

LFW has lamented, more than once, the lack of response to the proposal authors’ solicitations for feedback. When those who provide feedback are met with unprofessional and personal comments, the lack of response should come as no surprise. I hope you will consider how your attitude may be contributing to this lack of response and whether this lack of feedback helps or hinders Rocket Pool.

I hope the answers below help address some of the perceived “giant holes” in our proposal, and show the rest of the community that we are indeed trying to do something productive here.

Under what conditions is delegating to a node allowed?

The node must be under the max_collateral_ratio in order to receive delegation.

Under what conditions is undelegating from a node allowed?

There are no restrictions on undelegating. There are arguments for imposing some restrictions to smooth out changes, but this increases friction for stakers.
Undelegation takes effect at the end of the next rewards period (or on some other cadence, but after any RPL penalties have been applied).

How does delegated RPL interact with RPL the operator stakes themselves?

A NO who stakes on themselves is effectively delegating to themselves. All RPL delegated to a node is in a single pot; if slashed, all delegators (including the NO) are slashed proportionally to their share of the pot.

How is “a share of validator rewards” calculated in case of: multiple people delegating RPL, set of people delegating changing over time, when operator uses the smoothing pool, when “delegate of last resort” is active?

  • Multiple people: delegates earn rewards (or get slashed) proportionally according to their amount of delegated RPL compared to the total RPL delegated to the NO.
  • Changing delegates: delegation changes take effect at the end of a rewards period. At the end of each period, delegates who were active for the entirety of that period are rewarded. i.e. if you join midway through a period, you begin accruing rewards at the beginning of the next period.
  • Smoothing pool: we propose requiring all NOs using delegation to join the smoothing pool.
  • Delegate of last resort: incoming rewards are reduced by recollateralization_share. Remaining rewards are then divided as normal (see the sketch after this list).
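
To make the proportional accounting above concrete, here is a minimal sketch in Python. The function names and the single-pot structure are my own illustration, not a specification; the actual implementation would be finalised with the community.

```python
# Minimal sketch only: one pot of delegated RPL per node, with rewards (or
# penalties) split proportionally, and recollateralization taken off the top
# when the delegate of last resort is active. Names are hypothetical.

def split_proportionally(amount_eth, delegated_rpl):
    """Split an amount (rewards or a penalty) across delegators in proportion
    to their share of the node's total delegated RPL."""
    total = sum(delegated_rpl.values())
    return {who: amount_eth * rpl / total for who, rpl in delegated_rpl.items()}

def distribute_period(delegate_rewards_eth, delegated_rpl, recollateralization_share=0.0):
    """One rewards period: take recollateralization_share off the top (if the
    delegate of last resort is active), then split the remainder as normal."""
    to_recol = delegate_rewards_eth * recollateralization_share
    payouts = split_proportionally(delegate_rewards_eth - to_recol, delegated_rpl)
    return to_recol, payouts

# Example: the NO has self-delegated 300 RPL; two delegators supplied 500 and 200.
pot = {"node_operator": 300, "delegator_a": 500, "delegator_b": 200}
print(distribute_period(1.0, pot))
# -> (0.0, {'node_operator': 0.3, 'delegator_a': 0.5, 'delegator_b': 0.2})
```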

What happens to existing node operators when max_commission_rate is changed?

To simplify rewards calculations, max_commission_rate changes take effect at the beginning of the next rewards period.

If max_commission_rate is increased, no_share does not change, so the benefit goes to delegate_share/extra_reth_share. NO can decide whether to increase no_share.

If max_commission_rate is decreased, no_share is reduced proportionally, such that the no_share/delegate_share/extra_reth_share ratios remain unchanged.
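
As an illustration of the adjustment described above (using the hypothetical 14% cap and the 50:50 delegate_share:extra_reth_share split from the proposal’s illustrations, not final values):

```python
# Sketch only: how existing shares could be adjusted when max_commission_rate
# changes at the start of a rewards period. All values are illustrative.

def adjust_shares(no_share, delegate_share, extra_reth_share, old_max, new_max):
    if new_max >= old_max:
        # Increase: no_share is unchanged; the new headroom goes to
        # delegate_share / extra_reth_share (split 50:50 here, per the
        # illustrations). The NO may choose to raise no_share afterwards.
        headroom = new_max - old_max
        return no_share, delegate_share + headroom / 2, extra_reth_share + headroom / 2
    # Decrease: scale everything proportionally so the ratios are preserved.
    scale = new_max / old_max
    return no_share * scale, delegate_share * scale, extra_reth_share * scale

print([round(s, 4) for s in adjust_shares(0.07, 0.035, 0.035, old_max=0.14, new_max=0.10)])
# -> [0.05, 0.025, 0.025]: same 2:1:1 ratio, now summing to the new 10% cap
```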

Can the operator change no_share? Under what conditions and how?

Yes, by sending a tx from the node wallet via the smartnode interface.
Conditions would be similar to changing voter_share - the rate can only be changed once per rewards period, and the size of the change is limited to a percentage of the current value. This gives predictability to delegators, as NOs cannot rapidly and repeatedly change their no_share.

What value is proposed for recollateralization_share? How does it interact with the other shares when active?

We can provide a model for this to show the effect of different values at different projected rewards levels. The value should be higher than the prevailing average delegate_share, as recollateralization should be more expensive than delegation.

In terms of interaction, it is also similar to UARS. If recollateralization_share is active, that share of the rewards is subtracted from the total rewards. The remaining rewards are then split according to the defined shares.

What does “strategically allocated between delegate_share and extra_reth_share” mean? Is there a setting? What is the proposed value?

It means that it is set according to RP’s strategic goals. If delegation is not attractive enough the ratio can be changed to send more value to delegates. If we feel delegates are being paid too much we can send more value to extra_reth_share. It is analogous to how UARS allows changing voter_share/surplus_share to incentivise different behaviours.
Yes, this would be a protocol parameter. Proposed value is open to discussion - the illustrations show 50:50. As above, we can provide a model for this to show indicative values.

How are you addressing system brittleness? How does profitability of a megapool compare to a solo staker assuming 3% solo APY and 1.5% RPL value loss per year due to supply inflation?

As with the rework, the answer here depends on a number of protocol parameters and market conditions (amount of staked/delegated RPL, prevailing delegate_share or surplus_share values, etc.). Given that the value stream is the same in both cases, and the key difference is in the distributions, you can actually assume similar profitability to UARS in comparable scenarios. For example, assuming our max_commission_rate is equal to the rework’s reth_commission, a NO with maximum no_share who has self-delegated maximum RPL will earn equivalent rewards to a NO who is earning maximum voter_share (assuming surplus_share=0).

A NO who has relied on delegates for their RPL would have a lower no_share. Their rewards would be similar to a NO under UARS who has not staked any RPL and is therefore getting zero voter_share.

Given the complexity, it might make sense to work on a shared model which allows comparison of both proposals based on the various parameters. Happy to connect you with one of the 1kx data analysts if you would like to contribute here. We have already created a model for comparing changes in staked RPL under each proposal which is linked in the steel man doc.

Even though the value stream is the same we believe this offers a better deal to all participants, because the rewards are shared between the NO, their delegates, and rETH, without the voter_share value extraction.

The “10% borrowed-ETH minimum” is something that would be enforced by the protocol over time, assuming a) the collateralization rate is set to 10%, and b) the protocol continues to earn fees.

With regards to the second component:

The market has no way to signal that they’re interested in Rocket Pool at a minimum RPL stake set at 8% borrowed ETH versus 10% borrowed ETH. All we’d see is that the market simply stops making minipools (or even starts exiting them) when the overall package is seen as unattractive

Recollateralization removes the necessity for NOs to exit validators. New NOs can still join via the “first n validators” mechanism. For ETH-only NOs, ultimately their decision will be driven by the no_share on offer, just like in the rework.

In terms of signalling, if we see many undercollateralized nodes then we can assume the collateralization requirement is too high. Similarly, if some percentage of NOs are at or near the max cap we can assume it is too low.

So we address both aspects of this, reducing the incentive to exit, and the blockers to joining.

What is no_share for nodes that are in the “first n validator” state and are collateralized with pDAO-owned RPL? If they exist, what happens with pDAO owned rewards?

We propose max_commission_rate - recollateralization_rate as an initial value, but this could be adjusted by the pDAO. pDAO-owned rewards are automatically restaked, just like a normal delegator’s rewards. pDAO-owned RPL could also be unstaked and added to treasury RPL.

Where do funds to collateralize “first n validators” with pDAO-owned RPL come from? Can we ensure we don’t run out of funds? If so, how? If not, how many “first n validator” nodes can we support? What happens after?

There is no need to purchase this RPL in advance - it is purchased using recollateralization_share of the validator rewards. Essentially the first n validators start in an undercollateralized state, and are then recollateralized as normal. This ensures that we do not run out of funds. The theoretical limit of this is the 22% self-limit.

Is recollateralization_share the only source of protocol-owned RPL?

No. Protocol-owned RPL which is delegated will earn RPL yield.

How long will it take to bring a node back to minimum that has dropped to 5% for the proposed (or a range of options for) recollateralization_share?

This of course depends on the RPL/ETH ratio and ETH bond size of the NO’s validators. We can model this for a range of permutations of these values.

Please also note that staking rewards are automatically restaked. That is, RPL purchased with delegate_share is automatically restaked, further increasing the collateral on the node and accelerating recollateralisation. So the recollateralization rate is actually (recollateralization_share+delegate_share).
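
For a sense of the shape of that model, here is a rough back-of-the-envelope sketch. All values (rewards per period, shares, the 28-day period) are assumptions for illustration only, and RPL/ETH price movements are ignored by tracking collateral value in ETH terms:

```python
# Rough sketch, assumed values only: periods needed to climb from 5% back to
# the 10% minimum collateral ratio when (recollateralization_share +
# delegate_share) of the borrowed-ETH rewards is converted to RPL and
# restaked each period. Because both the ratio and the rewards are expressed
# relative to borrowed ETH, the bond size cancels out in this simplification.

def periods_to_recollateralize(
    start_ratio=0.05,                     # current RPL collateral / borrowed ETH
    target_ratio=0.10,                    # minimum ratio to reach
    rewards_per_period=0.03 * 28 / 365,   # ~3% APY on borrowed ETH, 28-day period
    recollateralization_share=0.04,       # assumed, not a proposed value
    delegate_share=0.025,                 # assumed, not a proposed value
):
    restaked_per_period = rewards_per_period * (recollateralization_share + delegate_share)
    periods, ratio = 0, start_ratio
    while ratio < target_ratio:
        ratio += restaked_per_period
        periods += 1
    return periods

print(periods_to_recollateralize())
# -> 335 periods under these illustrative assumptions; the real answer moves a
#    lot with the chosen shares, yields, and RPL/ETH price.
```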

What happens when a node with protocol as “delegate of last resort” reaches the minimum?

Recollateralization stops. The protocol-owned RPL continues earning yield from delegate_share.

Can you provide estimates for how much RPL the pDAO would acquire over, say, 3 years, how that compares to total RPL supply and

Yes, we can model this. Have you already completed the same analysis for buy+burn/buy+lp options? If so please share and I’ll try to ensure we provide the data in a similar format to simplify comparison. This would be a nice thing to include in the shared model, if we build one together.

how much impact pDAO delegating to protocol-aligned operators could have?

This is ambiguous, please be specific so I can answer accurately - what do you mean by “how much impact”?

Can you share your research that shows there is further room to safely enhance capital efficiency?

Yes, I will ask our data analysts to prepare our internal research for publication.

You give an example of C(1)=4 ETH and p=⅔. Does your research show these values to be safe? Why did you call it an example? What is the proposal?

I called it an example because it is the curve used in the illustrations, and that specific p value might not be the one adopted after community discussion. There are valid arguments to be made for different values, similar to the ongoing discussion about the initial values to use for UARS.

Given that the scope is already big here and anything we can do to reduce it (or areas for disagreement) is a win, let’s defer this part for future discussion, and exclude it from our proposal for now.

The bond curves described in RPIP-42 already provide significant increases in TVL, and combined with keeping the collateral requirement, this ensures we maintain robust security while optimizing growth.

You mention pDAO delegating RPL at variable commission rates here. Is this the “first n validator” idea mentioned earlier or in addition to it?

In addition. The goal of “first n validators” is to eliminate friction for new NOs. From their perspective, there is no collateral requirement.

The goal of pDAO delegation is providing subsidies to protocol-aligned NOs.

Where is pDAO RPL for delegating coming from? How does the proposal ensure enough funds, or what happens if funds run out? Any estimate of the size of this program?

Yield from protocol-owned RPL. If it runs out there will be more RPL available in the next rewards period. Size depends on amount of RPL delegated. IMO it would not be necessary to bootstrap this program - so many nodes are currently undercollateralized that, if recollateralization were enabled, purchases from recollateralization_share would immediately begin acquiring protocol-owned RPL, which would immediately begin earning RPL yield.

How does prioritization of “lowest validator count”, “lowest no_share” or “highest staked or delegated RPL” work? What happens when these values change for operators in queue?

The simplest and our recommended approach is lowest validator count. In order to game this a NO would need to exit existing validators to reduce their validator count, just so they could rejoin the queue - very irrational. Using lowest no_share could also work - it would be slightly more susceptible to gaming, but the fact that no_share can only be changed once per rewards period might be sufficient mitigation.

In both cases, no action would be required if these values change. Validator count is impractical to manipulate, and no_share likely cannot be changed frequently enough to have a significant impact.

What are the proposed changes to how RPL collateral is used? What is the proposed penalty system?

To begin with, MEV theft as defined in RPIP-42, with the addition of RPL penalties. We would consider proposing making this a scaling penalty (more validators = larger penalty). In a future separate proposal we would also propose introducing performance penalties, scaled to have more impact on larger entities, with exceptions for small stakers. We are monitoring the research on correlation penalties - if there is a trustless way to recognise correlation penalty events we could increase their effect within RP to further encourage decentralisation for large NOs.

How does it interact with the delegation system? (Multiple delegators, potentially including pDAO, mix of delegation and self stake)

The NO and pDAO are treated equally to other delegators. Any penalties are applied to the total RPL delegated to that node, affecting all delegators proportionally to the amount of RPL they staked.

Thanks again for the questions. Please let me know if you have any followups.

First of all: what the actual fuck. Private discussions are private.

The disparity between public and private comments put me in a very awkward position, and I decided transparency was the best option. Let’s be clear: acknowledging a flaw in private and arguing against it in public is the poor form here.

If I am kindly spending my personal time giving you feedback on an early draft (which is what this comes from), I don’t expect to have it thrown in my face

It’s a minor detail, but again it is misleading. The conversation was in fact in relation to the flaw in UARS, not something in our draft. Here is the quotation in context:

Again, if you think that quotation is taken out of context, please do post the entire chat, I am happy to be 100% transparent here. You did provide some feedback on the early draft in an earlier conversation, but this was from a later conversation about the rework.

I would rather be focussing on the pros and cons of our proposals than correcting these things but it needs to be addressed. You are in a trusted position in the community. People are relying on you to provide an accurate representation of each proposal:

Here you say that “It will be centralizing of everything. NOs and vote power”. That it would be centralising of NOs is your opinion - I believe the same about your proposal. That it would centralise vote power is false. We have discussed that delegation for rewards is not the same as delegation for voting. You are misleading people who are relying on you for an accurate representation of each proposal.

What you describe as “complex game theory bits” was in fact an obvious scenario that had already been considered. Running through the game theory here shows that the outcome is suboptimal for the NO and results in them sacrificing some profit which goes to pDAO, as I described here.

Here you see we’re “overpaying” everyone in the green and blue groups, most people in the orange group, and half the people in the red group.

The issue is not a misunderstanding on my part; rather, it is that the illustration entirely misses the point. This thinking is based on economically unsound ideas and is both simplistic and risky. Assuming that ideals alone can drive economic viability ignores the fundamental principles of market dynamics and cost structures.

The lines are drawn according to your subjective beliefs about the no_share values that would be acceptable to various staking cohorts. I am talking about the break-even point, below which it is unprofitable to operate.

One’s philosophy does not change economic reality. I care more about decentralisation than my neighbour, but we both pay the same for electricity. If the SSD in a NO’s NUC fails, Amazon does not give them a discount on the replacement based on how much they believe in decentralised L1s. On the other hand, the large entity who bulk-buys SSDs B2B certainly pays less per GB than the home staker buying on Amazon.

The green and blue groups in your chart might have the exact same financial break-even points, despite accepting different no_share values due to philosophy.

My concern here is that UARS ends up in a range that is unprofitable for home stakers, but profitable for large entities, leading to the centralisation of the validator set. I believe you are an intelligent person who understands there is a distinction between “no_share below the NOs break-even point” and “no_share that Val thinks NOs will accept due to their philosophical alignment”.

Without making this distinction it is impossible for you to perform a fact-based risk analysis.

Fwiw, this is not the first time the proposal authors have been informed of this risk (from NodeSet’s review):

An unstable economic contract will drive out smaller operators with higher operating costs than large, centralized providers.

You have presented your opinion on this multiple times. This opinion is based on your subjective belief of what no_share various groups will accept, when it should be based on their break-even points. Repeatedly stating your subjective beliefs on this matter is a distraction that prevents discussion of the actual issue.

Lacking a functional method of price discrimination (the economics term), there’s not a good way to pay everyone the “correct” amount. Where by correct here, we mean to squeeze every penny out of them so that they are barely choosing to be NOs.

This is not about price discrimination. It is about ensuring that our most-desired NOs are able to operate profitably. Please address the actual issue: the profitability of each group. The relevant economics terms here are “break-even” and “operating loss”.

I am no longer treating this as a good faith discussion.

In my opinion you have not been treating this as a good faith discussion for quite some time. Almost three weeks ago you insinuated that my suggestion that we try to capture subjective entry ticket value was motivated by a desire to take advantage of people:

Surely you’re not arguing that the form of value capture we should aim for in year 2 is that people in year 1 paid too much and got burned?

You did this again in the message I am replying to, where it looks to me as though you are trying to imply that my attempts to get you to acknowledge that different break-even points exist are an attempt to “squeeze every penny out of them so that they are barely choosing to be NOs”.

These are just two instances where your behavior has failed to meet my standards for good faith, and this doesn’t even cover the numerous times I’ve had to remind you to stay factual. I’m not the first person who has found it necessary to ask you to maintain professionalism in tokenomics discussions.

Here you say that I am treating this as an academic exercise and not treating the protocol seriously (Discord).

When I say it is like an academic exercise I do not mean this in an offensive way. I tried to make this clear with the train example. In this type of exercise you can typically change the answer by modifying one variable (in this case, train speed), and the scenario ignores things that exist in the real world: delays caused by bad weather or drivers showing up late, signal failures, and so on. In an academic exercise we can ignore these things. In the real world it is more complicated.

Consider a physics problem where you calculate the trajectory of a projectile, ignoring air resistance and wind. In reality, these factors significantly affect the outcome. Or think about a financial model that assumes perfect order execution without accounting for slippage.

Your view that subjective entry ticket value can be captured by changing a global parameter ignores real-world complexities such as scarcity value and expectations of future growth. While I believe you want to treat the protocol seriously, you are approaching the problem with overly simplistic models, exposing Rocket Pool to significant risk.

This is unacceptable behavior, and I will not engage further with you.

Fair enough. From my side, I think I have consistently been professional, polite, and engaging in good faith. The discussions have taken place in public and everybody can draw their own conclusions. People can also look back at your previous interactions with those providing feedback on your proposals, and decide for themselves if there is a recurring pattern here.

Fwiw, I genuinely am not attacking you personally and have no intention of hurting your feelings. My goal is to have a productive and constructive debate about tokenomics. In debates, it’s easy to feel personally attacked when, in reality, it is the ideas being challenged. I also understand and appreciate the significant time you’ve invested in the rework, and I know that criticism of it can feel personal.

This is the second time our conversation ended abruptly when I asked probing questions about the flaws with UARS. I know that we’re all human and our emotions can sometimes get in the way. Still, these critical questions seem to lead to an emotional response, which distracts from addressing the core issues. I appreciate we’ve been going head to head for a while, and hope we can reset and continue with a productive discussion.

And now you have posted private communications of ours publicly, perhaps in some kind of attempt to embarrass me?

My intention was not to embarrass you. My intention was increased transparency.

it’s very likely that the hobbyist and high ethos groups have lower averages than centralized entities.

I agree overpaying the big guys is a slightly sad side effect, but I don’t see a way to avoid it.

If I was publicly arguing “it’s very likely that hobbyists and high ethos groups will receive more delegation than centralised entities”, and then in DM wrote “I agree the big guys getting more delegation is a slightly sad side effect, but don’t see a way to avoid it”, I have no doubt you would feel the community deserves to know.

There is a lot in my response. If you feel like engaging further I have a few questions:

  1. Do you recognise that there is a difference between “break-even” and “no_share that someone thinks each group will accept”?
  2. If so, when you say “it’s very likely that the hobbyist and high ethos groups have lower averages”, are you referring to their average break-even point or the average lowest no_share you think they will accept?
  3. Do you feel that you have adequately assessed the risk that UARS stabilises in the zone where no_share is profitable for centralised entities but unprofitable for home stakers?
  4. If UARS were to stabilise in the zone where no_share is profitable for centralised entities but unprofitable for home stakers, and an increasing portion of the new validators were from centralised entities, what mechanisms in the rework would you use to attempt to reverse this trend?
  5. Apart from the “reduce the no_share” self-defence mechanism, does the rework include any other provisions to help reduce the likelihood of centralised takeover?
  6. What percentage of NOs do you estimate would leave the protocol if no_share were reduced to zero? (Ideally broken down by cohort, but single blended value is fine)
  7. What percentage of NOs do you estimate would remain with RP if no_share were below their break-even point? (Ideally broken down by cohort, but single blended value is fine)

It is not formally noted anywhere, and therefore may have escaped your attention, but the DAO forum and Governance channels on the Discord have a considerably higher expectation of serious and thoughtful discussion than the Discord’s #trading channel.

#trading is first and foremost a community hangout and shitposting venue. In such an environment it is not reasonable to expect people to carefully consider every statement or contort themselves into forced politeness. The same is true of casual private conversations undertaken in a non-official capacity; you cannot expect people to weigh every word carefully.

In my personal opinion, posting largely irrelevant snippets from these places in an attempt to discredit or weaken a person’s thoughtful, measured, and reasoned posts on the more serious discussion platforms is very unprofessional and just plain bad form.

Forgive me for the off-topic post. I do think it is great that 1kx is engaging with the tokenomics discussion; however, I believe there are better ways to advocate the merits of your proposal than through personal and antagonizing remarks directed at contributors who have differing opinions.


Thank you for sharing your opinion. Knoshua’s post was indeed thoughtful, measured, and reasoned, and me highlighting his alternating communication styles does not detract from that.

there are better ways to advocate the merits of your proposal than through personal and antagonizing remarks directed at contributors who have differing opinions.

I completely agree, but that goes both ways. My goal in highlighting Knoshua’s personal and antagonising remarks was to hopefully encourage him to stop doing it and focus on his concerns with our proposal.

Some follow-up questions.

So if I have RPL staked on my own node and then lower no_share below maximum to attract delegation from others, I start paying extra_reth_share on my own RPL?

Why require smoothing pool for using delegation?

Consider the following example with no recollateralization_share active:

  • max_commission = 14%
  • no_share = 7%
  • delegate_share = 3.5%
  • extra_reth_share = 3.5%

Then activation of a recollateralization_share of 4% would result in:

  • no_share = 5%
  • delegate_share = 2.5%
  • extra_reth_share = 2.5%

Am I understanding correctly?

You mention the rewards period multiple times in relation to delegation and splitting of rewards, but that only covers EL rewards; how do BC rewards work?

This doesn’t match what the proposal says:

To further simplify the process for NOs, new NOs will not be required to provide collateral in advance for their first n validators. They can launch their validators without staked or delegated RPL, and their node will be collateralized with pDAO-owned RPL.

You are now saying the node would start with 0 RPL and it wouldn’t be collateralized with pDAO-owned RPL?

If I start out using the first n validators that don’t require RPL or RPL delegation, what is my path to getting validator #n+1 through delegation? Do I need to attract enough RPL delegation to cover all n+1 validators (minus what has been accumulated through recollateralization)?

I am confused by this answer. Could you give a detailed breakdown of all shares for an example like I did above?

Does the protocol-owned RPL stay staked forever?

Does your research show the values of C(1)=4 ETH and p=⅔ used in illustrations to be safe?

So both ETH and RPL are used for MEV theft penalties? In what order? RPIP-42 defines penalty size as theft size + 0.2 ETH. When you say scaling based on number of validators, is that in addition to scaling with theft size or instead of? What’s the rationale?

In searching the discord, the first mention I saw from you that included “central” was just a few days ago on July 11th

You missed at least one :wink: Here is one from July 9th, where I am describing an attack vector that would exist today if the collateral requirement did not exist.

A quick search showed you mentioned “central” over 60 times in the new post, and it was the topic you were most eager to debate in the discord…

I am eager to discuss all of the proposal, but centralisation risk is an important topic, and it has become increasingly clear that the RPIP-49 authors have not fully considered the risks here.

Support for decentralisation is such a key part of what distinguishes RP from other LSTs, so those concerns are front and centre in our proposal, as are the concerns regarding the lack of protection RPIP-49 offers against centralisation of the validator set.

I was surprised to read your post and find such a strong emphasis of “centralization concerns” around rpl.rehab.

Why? I have been very clear about my fears that UARS will lead to centralisation. We are not the first group to raise these fears. I’m not sure why you would be surprised that discussing the centralisation risks of RPIP-49 requires us to use the word “central”.

Your new post shows a drastic shift, from optimism around your proposal to pessimism of the existing proposal

It seems you are mixing up parts 1 and 2 of our post. Part 1 describes why we believe it is the best direction for RP, and we remain optimistic about this approach.

Part 2 of our post describes why we think RPIP-49 would ultimately lead to negative outcomes for RP. We believe RPIP-49 will lead to significant reduction of RPL price and the centralization of the validator set. So yes, we are very pessimistic about RPIP-49, and the apparent lack of risk analysis that has been performed by the RPIP-49 authors has only increased this pessimism.

reductions in staked RPL/downward pressure on price
threatens to the stability/value/security/integrity of the entire governance system

We have already added some research to help quantify the “Will RPIP-49 lead to unstaking” question in the steel man doc. We believe that the rework will lead to RPL being unstaked, leading to additional sell pressure. Nobody would dispute that reducing the cost of the governance token reduces the cost of governance attacks.

Extreme centralization risk of RPIP-49

“extreme centralisation risk” is a quote from NodeSet’s review of the tokenomics, when they highlighted these issues in March. We share their concerns and do not think they were adequately addressed by the proposal authors when first raised. Here are two examples of that from the same thread:

  1. Valdorff dismissed the concerns as “pure FUD”. Wander responded with an answer to that question and Valdorff did not comment further in the thread.

  2. You responded that ETH-only NOs would not have pDAO votes, and the ability to reduce no_share would be enough to prevent centralised takeover, to which Wander replied:

The centralization concern isn’t for governance, it’s for key-man risk and consequently rETH safety. If Coinbase has 90% of validators, then rETH is essentially cbETH with some extra complexity (risk).

You did not comment further in the thread, so I am not sure if Wander’s explanation of the risks changed your thinking at all here.

The same concern was also raised by other community members around the same time. From @rocknet’s rationale of their concerns about the rework’s effects on decentralisation (context):

If we do this to compete with other more centralized players, we risk becoming an extension of the same players, driving more stake into the most well capitalized, playing into the worst criticisms of proof of stake consensus

A single large staking entity could migrate half of their current natively staked validators and create over 100,000 1.5ETH minipools, lifting over 3,000,000 rETH by themselves. Is that what we want?

In my view, these are valid concerns which have not been addressed. This makes us more vocal about the centralisation risk - if we believed they had been adequately considered by the proposal authors it would not be necessary to raise them now.

how should the DAO go about determining this?

You might be familiar with the Swiss cheese model of risk analysis. The idea is that no single defence mechanism can be 100% effective so we must use multiple layers to improve the overall security. We should use the same approach here.

While there are already some options available (e.g. Gitcoin Passport), it is an open research area which has not been fully solved. The point here is that a) we do not need to design a 100% effective system in order to provide an improvement over RPIP-49, and b) we can innovate and iterate over time as the solution space is further explored. More on this below.

What’s to stop centralized entities from pretending to be solo stakers to attract delegation?

One of Valdorff’s arguments against delegation is that big names would receive too much delegation. In order for this to be true, those entities would need to self-identify. So, if we accept Valdorff’s logic, we conclude that many of the larger entities will have more to gain from self-identifying than pretending to be a home staker, and will therefore self-identify.

As for the large entities who choose not to self-identify, in a permissionless system it is not possible to prevent them from attempting. Their ability to deceive pDAO into delegating to them depends on their ability to bypass the identification methods we implement.

It sounds like a nightmare to have our discord flooded with users seeking out delegations,

Imagine having our discord flooded with messages every time someone deploys a minipool or stakes RPL :wink: Yes, that would be annoying, so those messages go in the #events channel.

This is a non-issue. We can just make a channel for delegation requests and warn/ban anyone who requests delegation outside of this channel. Coordinating in discord is already suboptimal compared to the delegation dashboard; expecting a “flood” might be an overestimation.

I see no way to differentiate between the “solo staker” types we really want, and those just pretending in order to acquire delegations to subsidize more validators.

Valdorff has expressed similar sentiments:

Remember that we can only tell centralized entities apart if they wish us to tell them apart.

Have you considered the implications these statements have for no_share as a defence mechanism against centralisation?

It seems your view is that, if centralised entities are >x% of the validator set, pDAO will activate the self-defence mechanism and begin reducing no_share until they leave. In order to know whether centralised entities are >x%, you will need a way to distinguish between them and the home stakers.

When you are evaluating whether pDAO should activate the self-defence mechanism, what criteria will you be using to differentiate between centralised entities and home stakers? Any criteria you come up with should work equally well under our proposal.

Let’s use consistent logic here. If you can distinguish between centralised entities and home stakers in order to make the self-defence decision, we can use your criteria for our proposal. If you cannot distinguish between the two, it would be impossible to know when no_share reduction should be activated, rendering it even more useless as a defence mechanism.

jump through hoops more easily than home stakers could (the previous Aave borrowing/self-delegation approach being one such example)

As I described in my last response to you, the outcome of anyone performing this “attack” is increased validator count, increased rETH rewards, increased buy pressure for RPL, and increased RPL yield for pDAO. If you disagree with my response please say why and we can discuss it, otherwise please stop alluding to it as though it were a valid attack vector. It is misleading.

The explicit goals under the rework is to make Rocket Pool the most attractive venue possible for solo/home staker types. We try to open the tent as wide as possible to allow the largest set of NOs to participate.

When you “open the tent as wide as possible,” it is equally open to both large and small stakers, thanks to the design of UARS. However, large entities can afford to put a lot more ETH through the tent door. It seems you believe that once the tent is wide open, we will see more home stakers than centralized entities. If so, do you have data to support this hypothesis?

Instead of a tent, imagine we have a warehouse. We are opening up the warehouse doors as wide as possible and inviting people to take as much of our product as they can afford. Some people will be scraping together spare cash to afford one or two items, carrying their purchases home by hand. Others will turn up with a Black AMEX and a fleet of rented trucks. If the hope is that the majority of the products will go to the former group, the outcome might be disappointing.

Another defense of the proposal is that enabling growth in the protocol protects it by creating a higher cost to attack it. As Knoshua put it: “I can take over a protocol with 2 validators. I can’t take over a protocol with 200k validators”.

Knoshua’s comment ignores a fundamental point. I’ll repeat my reply here. The issue is rate of change over time, not absolute numbers at a single point in time. Let’s say we start on day one with 100% home stakers. Every day, we get x new home stakers, and y new centralised stakers. If x<y, after one year would the percentage of home stakers a) increase, b) decrease, or c) remain the same? Of course the answer is B, and if the rate of change favours centralised entities then over time they will become an increasingly large portion of the validator set. Please think long term here.
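
To put some purely hypothetical numbers on that reasoning:

```python
# Hypothetical numbers, only to illustrate the rate-of-change point above.
home, centralised = 1_000, 0   # day one: 100% home-staker validators
x, y = 10, 25                  # new home vs. centralised validators per day (x < y)
for _ in range(365):
    home += x
    centralised += y
print(round(home / (home + centralised), 2))
# -> 0.34: the home-staker share fell from 100% to ~34% within one year
```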

I think the proposals you put forward add friction to the system, which stunts growth and deters solo/home-staker types from easily scaling.

Are you accounting for the fact that the first n validators do not require upfront collateral? If so, I find it difficult to see how this adds friction or stunts growth when it is, from the NO’s perspective, functionally equivalent to having no collateral requirement.

For small values of n, home stakers are more likely to have <n validators than large entities. This means that the people we wish to attract the most experience no friction when launching their first n validators, which is exactly the outcome we want.

Under your proposal every node could end up potentially having different: NO_shares, Delegate_shares, Recollateralisation_share, and Extra_rETH_shares

recollateralisation_share would be a global param, as would the ratio between delegate_share/extra_reth_share, but other than that you are correct. And this is a good thing.

It’s what will encourage competition between NOs, which helps rETH APY, and allows pDAO to provide incentives for protocol-aligned stakers.

What is the rETH total commission? How much do NOs earn? How much does your delegated RPL earn? How much is protocol owned? The answer to all the previous questions is “it depends”, and “it depends on tons of moving variables across all of the validators”, and requires participants to be highly aware of pvp dynamics

Did you ask yourself the same questions about RPIP-49? The answers also depend on multiple moving variables.

Q&A for both proposals

What is the total rETH commission?

1kx:
  • The upper bound is set by max_commission_rate (set by pDAO). The lower bound is a function of the no_share selected by each NO.
  • Both of these values can change over time.
RPIP-49:
  • It is (node_operator_commission_share + surplus_share + voter_share).
  • All of these values can change over time.

How much do NOs earn? (Ignoring variability in ETH staking yield, performance, MEV, etc.)

1kx:
  • It depends on the NO’s no_share, which they set themselves.
  • If max_commission_rate is changed the NO might need to choose a new no_share value.
RPIP-49:
  • It depends on the prevailing no_share, which is set globally.
  • Understanding the factors which affect no_share requires reading RPIP-46 in its entirety. Some of these factors are:
    • node_operator_commission_share, surplus_share, increase_no_share_seal_increment, increase_no_share_seal_count, and allowlisted_controllers MAY be updated by pDAO vote
    • node_operator_commission_share and surplus_share MAY be updated by an address in the allowlisted_controllers array
    • The security council SHALL have a limited-use power to increase the node_operator_commission_share by increase_no_share_seal_increment and decrease the surplus_share by the same amount

How much does your delegated RPL earn?

1kx:
  • It depends on the total amount of RPL staked to that NO, whether that NO is undercollateralized, and the prevailing delegate_share offered by the NO you have delegated to.
RPIP-49:
  • It depends on how much RPL is staked overall, how much RPL you have staked relative to other NOs, and the prevailing voter_share.

How much is protocol owned?

1kx:
  • Somewhere between 0% and the upper bound set on protocol-owned RPL purchases.
  • It depends on the duration for which NOs were subject to recollateralisation, and the prevailing RPL/ETH price at the time of the recollateralization purchases.
RPIP-49 (I am mapping this question to the buy+lp scenario as it is closest):
  • The lower bound is 0% (assuming surplus_share=0).
  • The upper bound depends on surplus_share, and the prevailing RPL/ETH price at the time of buy+lp purchases.

As we can see, when we apply your questions to RPIP-49 the answers are also “it depends on prevailing protocol parameters”. I could easily make the argument that delegation settings are simpler to grok than UARS. But fwiw I do not see any value in this argument - “in order to calculate my ROI, I need to know the prevailing protocol parameters” is true for all yield-producing protocols, even Ethereum itself (what’s the APY? Depends how many validators there are). I’d be surprised if anyone could name a single yield-producing protocol where this is not the case.

Ultimately I don’t think anyone should be picking who is right and wrong for Rocket Pool, this goes against the permissionless ethos Ethereum champions and Rocket Pool followed

I agreed when you posted this in Discord, and afaik you never responded further. Please clarify your intent with this statement, because it could be interpreted as a repeated attempt to insinuate our proposal goes against permissionlessness.

Do you mean to imply that a proposal which attempts to benefit smaller NOs over large ones is “picking who is right and wrong for Rocket Pool”? You have expressed a desire to “subsidise the little guy”. RPIP-42 does too:

This proposal also explicitly tries to benefit the smallest NOs in a few ways, in line with the pDAO charter values of decentralization and prioritizing Ethereum health

Again, let’s use consistent logic. If helping the smaller NOs is a bad thing, then you should oppose both proposals, as both include this as a stated goal.

Questions about RPIP-49

As mentioned above, the centralisation risks of RPIP-49 are one of the main reasons we oppose RPIP-49. To help better understand these risks could you please answer the following questions?

  1. What criteria would you use to decide whether or not to activate the self-defence mechanism to prevent centralised takeover? i.e. how would you identify centralised entities in order to know whether they are >x% of the validator set?
  2. If you were unable to identify centralised entities, and therefore unable to know when to use the self-defence mechanism, would it change your opinion of its viability as protection against centralisation?
  3. Do you have any data which indicates that, once the “tent door” is opened, the majority of stakers will be solo/home staker types?
  4. Apart from the “reduce the no_share” self-defence mechanism, what provisions does RPIP-49 include to help reduce the likelihood of centralised takeover of the validator set?
  5. As a NO, how long would you remain with the protocol if no_share was set to 0%?
  6. It seems you think our proposal will harm permissionlessness. Under our proposal, from whom do you think NOs will need to request permission before launching a validator?
  7. Related to my questions above, do you believe there is a difference between “no_share that is profitable based on the NO’s break-even point” and “no_share that Valdorff thinks each group will accept according to their level of Ethereum alignment”? If so, which do you think will be the primary motivating factor for the majority of NOs?
  8. If UARS were to stabilise in the zone where no_share is profitable for centralised entities but unprofitable for home stakers, and an increasing portion of the new validators were from centralised entities, what mechanisms in RPIP-49 would you use to attempt to reverse this trend?
  9. Imagine a centralised entity launches a significant number of validators and pDAO decides to begin reducing no_share, as you described here. Onchain activity and social media show a large number of home staker types exiting the protocol, but the centralised entity has not left. In this scenario, which of the following do you personally believe would be the best course of action? a) continue reducing no_share, b) begin increasing no_share, or c) leave no_share unchanged

Hi Langers, thank you for the review and detailed response.

We recognize that delegation doesn’t remove all friction and that this could impact growth. However, RPIP-49 eliminates the collateral requirement and introduces a very real risk of centralised takeover of the validator set.

Although the rework might accelerate TVL growth, if 90% of this growth comes from centralized entities, Rocket Pool will no longer be a decentralized liquid staking protocol. The flaws in UARS only exacerbate this risk, as I will describe below.

As such we believe our approach, while not removing 100% of the friction, ultimately leads to better outcomes for all members of the Rocket Pool ecosystem.

Token buys

I do have some concern that the protocol will end up buying a lot of its own token due to under-collateralised nodes. If RPL earns a decent yield within the protocol without running a node, we may exacerbate the liquidity issues that currently exist.

I recognise the concern, but this also applies to using surplus_share for buy+burn or buy+lp. Both proposals include mechanisms to mitigate this - our proposal through changing the target collateralization ratio, and RPIP-49 through adjusting surplus_share.

Regarding “without running a node”, there might be a misconception here: the only way for RPL to earn yield is if it is delegated to an NO, meaning there is no yield without a node being run. Here it is the same as a polygamous whale marriage - one party providing the ETH, multiple parties providing the RPL.

Additionally, there is a further benefit: One NO reported on Discord that they would be interested in delegating to other NOs if their node was collateralized to the max cap. Delegation allows NOs to earn yield on RPL that would otherwise be idle capital.

Voting

I am also not sure how RPL works as a governance token with delegation by default.

Rewards delegation would be separate from voting delegation. A delegator could earn rewards from the NO offering the highest returns while delegating their vote to the NO who aligns most closely with their philosophical views. This separation is crucial, as the answers to “who will pay me more” and “who best represents my views on the long-term health of RP” may differ. This distinction helps protect against centralized governance takeover, ensuring that the NO offering the highest rewards doesn’t necessarily accumulate the most voting power.

We have seen in our own delegate voting system that delegations are attracted to known entities only
Delegated RPL stake would end up centralising stake on known / probably doxed entities

I do think the distinction between “known entity” and “doxxed” is important. Looking at the existing delegates, many are anon but known entities.

While voting delegation tends to favor those popular in the community, the market dynamics of delegation for rewards are different. Rewards are shared among all delegators to a NO. If everyone delegates to the most popular NO, their rewards are diluted. Therefore, there is a direct financial incentive to delegate to NOs with less delegation, an incentive which does not exist with voting delegation.
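
To make this incentive concrete, here is a rough sketch of per-RPL payout under the assumption that a node's delegate_share rewards are split pro rata among all RPL delegated to it (the function name and numbers are purely illustrative, not protocol code):

```python
# Illustrative only: assumes a node's delegate_share rewards are split pro rata
# among all RPL delegated to that node. Numbers are made up.
def per_rpl_payout(delegate_share_rewards_eth: float, total_rpl_delegated: float) -> float:
    """ETH earned per delegated RPL on a single node over some period."""
    return delegate_share_rewards_eth / total_rpl_delegated

# Two nodes paying out the same delegate_share pool:
popular_node = per_rpl_payout(delegate_share_rewards_eth=1.0, total_rpl_delegated=50_000)
quiet_node = per_rpl_payout(delegate_share_rewards_eth=1.0, total_rpl_delegated=10_000)

print(f"popular node: {popular_node:.6f} ETH per RPL")  # 0.000020
print(f"quiet node:   {quiet_node:.6f} ETH per RPL")    # 0.000100, 5x higher
```

So, all else being equal, the marginal delegator earns more by delegating to the less crowded node.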

I wouldn’t say that it is a robust mechanism though, because it would still be very possible
In fact, if you do not decouple RPL as a collateral it ends up being worse. A centralised entity could capture the validator set and by default governance.

Delegating one’s RPL to a NO does not delegate one’s vote to that NO. Thus, even capturing the entire validator set would not grant them governance powers.

The mechanism remains the same as it is today and under RPIP-49: Someone seeking to control governance would need to buy or borrow enough RPL, or convince enough people to delegate their votes to them (which is separate from “reward delegation”), to push through proposals.

A centralized entity that controls 90% of the validator set can make viable threats against the protocol. The same issue was raised by NodeSet in March. We do not believe adequate consideration has been given to this risk.

We believe that removing the collateral requirement will result in significant amounts of RPL being unstaked (please see steel man doc for our model). If this unstaked RPL is sold on the market and leads to a price reduction, the cost of governance attacks is reduced.

We believe that retaining the collateral requirement, combined with automatic RPL purchases as a result of recollateralisation and delegate_share rewards, provides increased buy pressure for RPL, which could lead to an increased RPL/ETH price. If this belief is correct, then our proposal would actually make the protocol far more resistant to governance attacks:

  • The attack would cost more if RPL/ETH price is higher
  • Protocol-owned RPL removes RPL from the market, reducing the available supply, further increasing the cost of governance attacks
  • Staked RPL is also not available to the market, further reducing the available supply, further increasing the cost of attacks

Also, please note that the worst case scenario here - 100% of delegation goes to centralised entities, allowing them to capture as much of the validator set as they can afford - is the default scenario under RPIP-49, which provides no mechanism to disincentivise centralised entities without also disincentivising home stakers.

The only defence offered in this regard appears to be “start reducing no_share and hope the centralised entities leave before the home stakers”, which we do not consider to be a viable protection mechanism.

Friction and Competitiveness

I believe that seeking RPL delegation would add considerable friction that Lido does not have.

“Seeking RPL delegation” just means setting a competitive rate in the smartnode interface, and does not have to be a labor-intensive process.

As @rocknet put it:

If we do this to compete with other more centralized players, we risk becoming an extension of the same players

I acknowledge there is some friction compared to Lido, but this is the cost we pay to protect decentralization.

a considerable amount of our TVL comes from individual node operators that have a large amount of ETH. This maybe a consequence of trade off, but we would be uncompetitive in this space; these node operators would favour Lido and contribute considerably to increasing their TVL.

This is true under either proposal: RP’s competitiveness for ETH-only NOs depends on the prevailing no_share. In both cases, the NO is giving up some share of their rewards (delegate_share/extra_reth_share in our proposal, voter_share/surplus_share in RPIP-49). If RP offers a competitive no_share, RP will attract NOs over Lido. If no_share is unattractive compared to Lido, we can expect NOs to migrate.

Additionally, we can allow existing NOs to migrate without meeting the minimum collateralization ratio, ensuring there is no friction for this group.

For the free-collateral node operators, it looks like the recollaterisation amount comes out of their node commission, is that right?

Yes, that’s right, similar to taking voter_share/surplus_share under UARS.

Node operators may have more ETH to stake but unable to attract delegation, causing a TVL stall.

If an NO cannot attract delegation, it is likely because their no_share is too high or the protocol parameters controlling the split between delegate_share and extra_reth_share are balanced too far towards extra_reth_share.

Mercenary capital is a flight risk that can make our ability to scale less predictable.

This issue is at least partially mitigated by the fact that mercenary capital exiting does not affect existing validators—there is no need to exit validators in response. When mercenary capital enters, it facilitates a growth in validator count, increasing both TVL and rETH yield. When it exits, the increases to TVL and rETH yield remain. The only difference is that after they leave, recollateralization_share might result in increased RPL purchases, potentially benefitting RPL price.

Low RPL prices push many people under the 10% ratio so they cannot deploy more ETH

This is a risk. However, they have multiple options: wait for automatic recollateralization, wait for more delegates, or provide their own RPL.

Moreover, lower RPL/ETH prices would result in recollateralization_share purchasing more RPL, removing more supply from the market. This could accelerate the recovery of RPL/ETH, which in turn would speed up recollateralization. Please also note that RPL rewards are automatically restaked - that is, the recollateralization rate is (recollateralization_share+delegate_share).
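
As a rough back-of-envelope of both points (the ETH amounts and share values below are assumptions, not proposed parameters):

```python
# Illustrative only: a fixed amount of ETH routed to recollateralisation buys more
# RPL when RPL/ETH is lower, and restaked delegate_share rewards stack on top of
# recollateralization_share. All numbers are assumptions.
def rpl_purchased(eth_for_recollat: float, rpl_eth_price: float) -> float:
    return eth_for_recollat / rpl_eth_price

print(rpl_purchased(10.0, 0.005))  # 2000 RPL bought at 0.005 ETH/RPL
print(rpl_purchased(10.0, 0.003))  # ~3333 RPL bought at 0.003 ETH/RPL

recollateralization_share = 0.01  # assumed
delegate_share = 0.02             # assumed
effective_recollateralization_rate = recollateralization_share + delegate_share
print(effective_recollateralization_rate)  # 0.03, since RPL rewards are auto-restaked
```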

As you mentioned, the free-collateral mechanism is gamable so it would depend on the trade off between holding 10% borrowed ETH collateral in RPL and the bond curve returns. It would be good to see analysis on this. My intuition says that a significant number of node operators would rather game the system and have marginally less yield.

Sure, we can provide some analysis here; we would need it to identify optimal recollateralization_share rates too.

The trade-off would be between a) holding 10% of borrowed capital as RPL, b) giving up some no_share to attract delegation, and c) bond curve returns. The middle option adds flexibility and moves us away from the binary “hold 10% RPL or sacrifice rewards” issue we see today. Economically it is similar to RPIP-49 from the NO’s perspective, but it does not abandon the collateral requirement.
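
A minimal sketch of how a NO might compare these options for a single 8-ETH-bond validator; every rate below is an assumption for illustration, not a proposed parameter:

```python
# Illustrative only: comparing (a) providing your own RPL vs (b) ceding part of
# no_share to attract delegation, for one validator borrowing 24 ETH.
borrowed_eth = 24.0
staking_apr = 0.04          # assumed gross staking APR on borrowed ETH
no_share = 0.05             # assumed prevailing commission rate

commission_eth_per_year = borrowed_eth * staking_apr * no_share   # ~0.048 ETH/yr

# a) hold 10% of borrowed ETH as your own RPL: keep full commission,
#    but tie up 2.4 ETH of capital in RPL
own_rpl_capital_eth = 0.10 * borrowed_eth

# b) cede part of the commission to delegators instead of supplying RPL yourself
delegate_cut = 0.3          # assumed fraction of commission given up to attract delegation
commission_with_delegation = commission_eth_per_year * (1 - delegate_cut)

print(commission_eth_per_year, own_rpl_capital_eth, commission_with_delegation)
# Option (c), the bond-curve route on the first n validators, trades both of these
# against lower per-validator returns and is omitted here.
```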

We’ve considered gaming scenarios, similar to the “borrow RPL from Aave, stake, scale up, unstake” strategy. While possible, this is financially suboptimal, so rational actors are likely to choose the highest yield approach. Even if they don’t, TVL, rETH APY, and protocol-owned yield all increase.

The sock puppet NO pays more in gas, is less profitable, and some of that sacrificed profit benefits the protocol. Preventing this would require a permissioned process, which is obviously out of the question.

In practice they will have to compete with other node operators to attract RPL delegation. The design intentionally minimises node commission, which is great for rETH but means our node commission may be noncompetitive compared to Lido.

Sure, but it is possible for no_share to be noncompetitive compared to Lido under either proposal. If the RPIP-49 total commission were too low it would be great for rETH but node commission would be noncompetitive compared to Lido.

If UARS ends up at a point where no_share is noncompetitive with Lido, that rate is imposed on the entire validator set and we can expect NOs to leave. Likewise, if the delegation market results in a prevailing no_share that is noncompetitive with Lido, we can expect NOs to leave. So in both cases it is about finding the values that keep both the demand and supply sides of RP's market competitive.

The worst case is that we end up in a race to the bottom and we price out node operator supply, particularly individual home stakers. I like the idea that market dynamics can find an equilibrium but practically we have to compete on both sides of our market.

This is another concern that applies to both proposals, albeit with different market dynamics. We believe this race to the bottom is more likely with RPIP-49. Under UARS, if RP is popular with NOs, no_share will be reduced in response to increased NO supply. It’s possible that it could be reduced to a point where it’s unprofitable for home stakers but still profitable for large entities. The equilibrium found by UARS might result in home stakers leaving the protocol due to unprofitability, leading to the gradual centralization of the validator set.

  • Assumption 1: On average, centralized entities have lower costs than home stakers and will therefore be profitable at a lower no_share.
  • Assumption 2: Home stakers making a loss will eventually leave the protocol.
  • Assumption 3: New stakers will only join if they expect a positive ROI.

If we accept Assumption 1, it implies a “danger zone” for no_share where it is unprofitable for home stakers but profitable for centralized entities.
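
A toy numerical example of this danger zone, using assumed (not measured) costs and rewards:

```python
# Illustrative only: all costs, rewards, and shares below are assumptions.
home_staker_cost = 0.05        # ETH per validator-year in hardware/power/time (assumed)
centralized_cost = 0.01        # ETH per validator-year at scale (assumed)
gross_rewards = 1.0            # ETH per validator-year on 32 ETH, rough figure (assumed)
borrowed_fraction = 24 / 32    # e.g. an 8-ETH bond validator

def commission(no_share: float) -> float:
    """Extra ETH a NO earns per validator-year from no_share on borrowed ETH."""
    return gross_rewards * borrowed_fraction * no_share

for no_share in (0.10, 0.05, 0.03, 0.01):
    c = commission(no_share)
    print(f"no_share={no_share:.2f}  home: {c - home_staker_cost:+.3f}  "
          f"centralized: {c - centralized_cost:+.3f}")
# Under these numbers, no_share around 0.03 covers the centralized entity's costs
# but not the home staker's: that is the danger zone implied by Assumption 1.
```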

Consider a scenario where RP is popular with NOs and the NO queue is >1000 deposits for four weeks. The pDAO follows one of the example guidelines in RPIP-46 and reduces no_share to manage NO supply.

  • As no_share is reduced, it enters the danger zone. While in this zone, the protocol is (on average) only profitable for centralized entities.
  • Home stakers will be the first to leave, increasing the centralization of the validator set.
  • Home stakers will be less likely to join because it would be unprofitable for them. Most or all new stakers will be centralized entities, further increasing the centralization of the validator set.

If no_share remains in this danger zone for too long, centralized entities could become the majority of the validator set.
We view this as a fundamental and unaddressed flaw in UARS, and highlighted it here.

the pDAO was the delegate of last resort we would not be able to tell the difference between a small operator and a large operator

pDAO as “delegate of last resort” is part of recollateralisation_share, which would be available to all NOs (it is automatic and there is no automated way to differentiate between these groups).

Separate from that is pDAO delegation, which allows pDAO to give small operators a boost compared to centralised entities. We believe there are ways in which pDAO could differentiate between small operators and large operators. None of these are 100% effective, but they do not need to be in order to provide an improvement over RPIP-49. Please see my response to Samus (“Swiss cheese model”) for more thoughts on this.

I think it is better that the base layer stay as permissionless as possible and delegation is achieved higher in the stack (Nodeset, RocketLend, etc).

This proposal is a well thought out solution but frankly the core delegation system I don’t see helping us compete or stay permissionless.

The issue with delegation is that it ends up relying on identity. […] This is extremely important to ensure the protocol is permissionless and open.

I completely agree with the importance of keeping the protocol permissionless and open, and our proposal does not change these attributes. Under our proposal nobody would need permission to become a NO.

I fully acknowledge that Waq would find it easier to get delegation than NewDiscordUser1234, but this is separate from being permissioned or permissionless. The “first n validators” mechanism addresses this issue for new or anonymous NOs. The fact that this mechanism is gamable, and we cannot prevent it, demonstrates that permissionlessness remains intact.

Our proposal would keep the base layer just as permissionless as it is today:

  • Small NOs can join with no delegation (and large NOs can game this, at the cost of efficiency)
  • There is no requirement to doxx oneself, register personal info, undergo KYC, etc. in order to receive delegation. Just register your node as normal, and you are ready to receive delegation. The only identity required is an Ethereum address.
  • Anyone can borrow/purchase RPL and completely avoid the delegation system

There is no need to get permission from anyone before becoming a NO, therefore it remains just as permissionless and open as under the current system.

Removing RPL as a collateral is a calculated risk

I agree it is a risk, but have yet to see any evidence that it is calculated :wink: Happy to be proven wrong if there is some data to support this belief, but to me it seems like RPIP-49’s approach to preventing centralisation is just “open up the tent doors and hope for the best”.

By doing this we attract the long tail of node operators that will ensure Ethereum is permissionless, open, and credibly neutral.

The permissionless/open issue came up a few times in your response. I hope I have demonstrated that our proposal would not impact permissionlessness, but if you have any remaining concerns here please let me know.

Thanks again for your review and feedback.


Hi Mike,

The 1kx proposal maintains that the DAO (or RPL holders generally) should try to delegate more share to “protocol aligned NOs”. I asked how the DAO should determine “protocol aligned”. To summarize, your response was:

  1. “We can’t, but we can try (swiss cheese of several methods)…” ok, so what methods will you try?

  2. “already some options available (eg, Gitcoin passport)”… I don’t see how that would help; nothing stops a centralized entity from sock-puppeting this

  3. “They have more to gain by self-identifying”… Ok, so that is centralizing then (self-identified centralized entities benefit the most)? Or they don’t have more to gain by self-identifying, in which case they do have incentive to sock puppet…

  4. “Their ability to deceive pDAO into delegating to them depends on their ability to bypass the identification methods we implement”… What methods?

The entire 1kx proposal thesis (and the concerns around RPIP49) is that a delegation system would be better since it benefits “decentralized/protocol aligned NOs”, but the critical first steps have to be:

  1. Identifying those actors

  2. Filtering out pretenders/fake actors

Please provide some examples of how the 1kx proposal could implement those steps, beyond “swiss cheese of many methods” and “eg, Gitcoin passport”.

There is also the challenge that, even if you accomplish those steps and it is somehow possible to know what RPL delegators “should do”, there is no way to “force” RPL holders to choose what they “should do” over financial incentives to do something to the contrary.

Skipping ahead to elsewhere in your response…

This is a non-issue. We can just make a channel for delegation requests and warn/ban anyone who requests delegation outside of this channel

It is more nuanced than that. Right now there is no incentive to sock puppet discord accounts and pretend to be multiple anons. If you instead incentivize this behavior as accounts try to “build social capital” for attracting delegations – you could cause more headache and noise than actual signal/substance in the discord. Centralized entities could attempt to build credibility through sock puppeting many anon accounts (all outside of actually requesting delegation in a “special channel”).

It seems your view is that, if centralised entities are >x% of the validator set, pDAO will activate the self-defence mechanism and begin reducing no_share until they leave

I never said “if centralized entities are >x% of the validator set”; I’m not sure where you got that idea. “Decentralization” isn’t necessarily “centralized entities are <x% of a set”; an alternative way to quantify decentralization is to count the total number of decentralized participants. Again, back to the best example of a decentralized network I can think of (Ethereum). Ethereum is estimated to have only ~6.5% solo stakers (Solo stakers: The backbone of Ethereum — Rated blog), but this “small percent” is a long tail of node operators that is critically important to censorship resistance and the overall health of the network. A single solo staker swings way above its weight class in its benefits to Ethereum compared to just 32 more ETH through Coinbase.

A simple example is that I think a network with 6.5% of its stake split among ~10,000 independent solo stakers (Ethereum - How Many Solo Stakers? — GLCstaked) out of ~14,000 validating nodes (https://www.theblock.co/post/285262/ethereum-one-million-validators) would be a much more robust “decentralized” network than a hypothetical network with 50% of its stake split among 50 independent solo staking nodes out of 70 validating nodes, even though 50% is > 6.5%.

I find it remarkable that if there are ~10,000 independent solo staking nodes on Ethereum, Rocket Pool accounts for ~1/5th of that node operator set, and ~1/7th of the total validating node operator set on Ethereum, even though Rocket Pool only makes up ~2.2% of staked ETH (https://dune.com/hildobby/eth2-staking).

By lowering the barrier to entry, the RPIP49 proposal can hope to bring more home stakers online, which provides great benefits to Ethereum, as more home staker nETH can lift even more staked pETH.

On the point about RPL, I disagree with it being used as a barometer for decentralization, though I’ve heard 1kx and NodeSet (previously) try to make that assertion. My point in the post you linked was that if I did accept that assertion (“ETH-Only = bad guys” and “RPL = good guys”), then under UARS we could lower no_share to zero, so all the revenue goes to the “good guys” and none goes to the “bad guys” (and the “bad guys” have zero governance weighting). I disagree with that assertion though (the most Ethereum-aligned guys – solo stakers – may prefer ETH-Only), so I don’t think that is what we should do. I responded further in my Google Sheet response to NodeSet, but their response of “Coinbase has 90% of validators” doesn’t make sense to me… in the hypothetical example of lowering commission to zero and giving all revenue to staked RPL, why would Coinbase continue to run RP validators if they provide no benefit over solo staking (and add more risk) – especially if there are other permissionless options available offering better yields, such as Lido CSM?

Our proposal doesn’t claim or attempt to differentiate between centralized entities vs home stakers (since this currently seems impossible). If you meet the capital requirements, you may participate in the system (permissionless). The RPIP49 proposal doesn’t show partiality by subsidizing capital requirements (unlike the 1kx proposal, which does so through in-protocol delegations), so no one has any protocol-supported advantages in the ability to grow their validator set (credibly neutral). The fear is that providing protocol-supported advantages benefits centralized entities more than a neutral system does (mostly due to the sybil resistance problem), and I still haven’t heard a response that addresses that concern.

As I described in my last response to you, the outcome of anyone performing this “attack” is increased validator count, increased rETH rewards, increased buy pressure for RPL, and increased RPL yield for pDAO. If you disagree with my response please say why and we can discuss it

The point was that the same “centralized actors” you fear will “take over” the protocol under RPIP49 can still find themselves in the same (ETH-only) position in the 1kx proposal, but home stakers have fewer resources and less ability to jump through these hoops. So yes: “increased rETH rewards, increased buy pressure for RPL, and increased RPL yield for pDAO”… But this is also accomplished by RPIP49 lowering no_share, which is much simpler and doesn’t include the friction of the 1kx proposal that acts as a barrier to home stakers. So at the end of the day the 1kx proposal seems net worse for centralization on this topic (just one example).

It seems you believe that once the tent is wide open, we will see more home stakers than centralized entities. If so, do you have data to support this hypothesis?

Again… I never said “more home stakers than centralized entities”; I don’t know where you keep getting this idea. We already discussed this on Discord: Discord

As knoshua wrote: ‘you keep setting the standard at “guarantee that centralization is not possible” when the argument is that it’s more likely under 1kx’

If you are looking for a design where economies of scale have no impact, then I don’t think that is possible. The question then becomes: is a neutral system better than an opinionated system (through delegations)? My response is: a neutral system is better since any opinionated system I can think of only exacerbates centralization concerns. This is the fundamental question I ended my last response with, and I haven’t heard an answer from 1kx on it yet (that addresses the concerns).

Are you accounting for the fact that the first n validators do not require upfront collateral?

Could you propose an actual “n” value? If “n” is too small, it doesn’t provide much benefit to home stakers (too much friction still). If “n” is too big, the incentive to sock-puppet only grows higher. I don’t think you can find a meaningful “n”, and I’m not sure you’d ever be able to verify its effectiveness since it’s impossible to know when sock-puppets start (back to my main concern).

[NOs setting their own lower commissions] is what will encourage competition between NOs, which helps rETH APY, and allows pDAO to provide incentives for protocol-aligned stakers.

My concern is it leads to a race to the bottom that prices out home stakers and accelerates centralization (subsidized validator set growth as well).

Responding to some other points you had, including your collapsible section

Extreme centralization risk of RPIP-49

Samus: I was surprised to read your post and find such a strong emphasis of “centralization concerns” around rpl.rehab.
Mike: Why? I have been very clear about my fears that UARS will lead to centralisation.

I was just pointing out the gap in time between your initial engagement: (June 17th) https://dao.rocketpool.net/t/tokenomics-rework-update-1-new-explainers/3014/9

And the first “centralization” concern you brought up (July 9th) that quickly became the highest priority for you to discuss by your second post (July 13th)… I appreciate the engagement, but the quick turnaround and emphasis on the topic surprised me, so I wanted to focus on that topic in my response.

It seems you are mixing up parts 1 and 2 of our post. Part 1 describes why we believe it is the best direction for RP, and we remain optimistic about this approach. Part 2 of our post describes why we think RPIP-49 would ultimately lead to negative outcomes for RP. We believe RPIP-49 will lead to significant reduction of RPL price and the centralization of the validator set. So yes, we are very pessimistic about RPIP-49

The first four of the seven paragraphs of your original post on this thread focus on pessimism around the RPIP49 rework, which is what I quoted in bullet points in my previous reply (I guess you wouldn’t call that “Part 1”?)…

We have already added some research to help quantify the “Will RPIP-49 lead to unstaking” question in the steel man doc (Steelman Arguments Against Tokenomics Rework - Google Docs)

I added a response to the doc, including a more thorough analysis of the excel sheet posted… There were some mistakes from 1kx due to outdated information, and generally I disagree with the logic/data presented.

You did not comment further in the thread, so I am not sure if Wander’s explanation of the risks changed your thinking at all here.

Wander posted as a reply in the main thread, and then linked to the other thread you mentioned. I responded briefly first in NodeSet’s thread (“I haven’t had time yet but I plan to read through the report and respond specifically to where I disagree. But I wanted to go ahead and respond to this since the question was also coming up in the discord…”), but in much more detail in the main thread:
https://dao.rocketpool.net/t/2024-tokenomics-rework-drafts/2847/41?u=samus

The same concern was also raised by other community members around the same time…which have not been addressed.

I included a thorough response to rocknet as well as NodeSet in the sheet I linked above (and never got a response from either party)

It’s ok for people to not respond further, and it’s ok if you disagree with my responses, but it’s incorrect to say “we haven’t considered them”. One example was that these concerns helped lead to the Express Queue mechanism I proposed, to assist existing NOs and small NOs in ensuring they have the ability to migrate/join even with large ETH-Only NOs coming online and potentially limited rETH demand.

Did you ask yourself the same questions about RPIP-49? The answers also depend on multiple moving variables.

I think there was a misunderstanding with your Q&A…

My point was RPIP49 UARS variables are all Universal: all borrowed ETH revenue pays the same no_share commission, all borrowed ETH revenue pays the same to surplus share, all borrowed ETH revenue pays the same to voter share, all vote eligible RPL earns the same APY, and rETH commission is the sum of these Universal variables.

Under 1kx: every NO can have a different commission, delegate_share/extra_reth_share, and recollateralisation_share (it may be activated on some nodes and not others, depending on your collateralization). None of these variables universally apply to all NOs. Similarly, under 1kx, because RPL rewards are not socialized, the APY your RPL earns depends on the node you stake with (more RPL staked on the same node means fewer rewards per RPL, since the same pie is split among more RPL).

This is what I meant by tons of moving variables at the same time (every node is different, like different StakeWise vaults), requiring awareness of PvP dynamics (unlike universal variables, where this doesn’t exist).
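
If it helps, a minimal sketch of the contrast being described (parameter names mirror the discussion; values are placeholders rather than proposed settings):

```python
# Illustrative only: contrast universal UARS shares with per-node 1kx-style splits.

# RPIP-49 / UARS: one set of universal shares applies to all borrowed ETH.
uars = {"no_share": 0.05, "voter_share": 0.05, "surplus_share": 0.04}
reth_commission = sum(uars.values())  # 0.14 for every validator in the set

# 1kx-style delegation: each node can carry its own split, so per-RPL APY depends
# on which node you delegate to and how much RPL is already delegated there.
nodes = [
    {"delegate_share": 0.05, "rpl_delegated": 20_000},
    {"delegate_share": 0.08, "rpl_delegated": 80_000},
]
for n in nodes:
    rewards_to_delegators = 1.0 * n["delegate_share"]  # per 1 ETH of borrowed-ETH rewards (assumed)
    print("per-RPL payout:", rewards_to_delegators / n["rpl_delegated"])
# Even with a higher delegate_share, the second node pays less per RPL because more
# RPL is competing for the same pie.
```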

Please clarify your intent with this statement, because it could be interpreted as a repeated attempt to insinuate our proposal goes against permissionlessness

You had suggested that the 1kx proposal allows for “automatic” delegation to home stakers (Discord), as though there were some kind of “automatic method” for determining home stakers vs centralized actors. You then also suggested why not just “trust the pDAO” to make the best decision on who to delegate to, which subsidizes that delegate’s validator set growth (Discord).

It was in that context that I commented that I don’t think we should trust anyone to pick winners and losers of who has an easier or harder time growing their validator set (lower cost of capital for the “winners”, who don’t need to bring their own RPL). The permissionless ethos I was referencing is one where everyone is treated equally just by bringing the same capital requirements, instead of the protocol picking and choosing. I understand that with the 1kx proposal you could still “permissionlessly” bring more RPL capital, but different capital requirements for some vs others can lead to a similar effect as a permissioned system like Lido (permissioned entities have smaller capital requirements and can therefore grow their validator sets more quickly than home stakers can). Perhaps “credibly neutral” would have captured the idea better than “permissionless ethos”.

Do you mean to imply that a proposal which attempts to benefit smaller NOs over large ones is “picking who is right and wrong for Rocket Pool”? You have expressed a desire to “subsidise the little guy”. RPIP-42 does too:

The problem again comes back to the fact that you have no way to know who is actually a small NO and who is a large entity pretending to be a small NO. There is a limit to how much we can “subsidize” the little guy, since if you try to subsidize too much, then large NOs can just game the system. The bond curve was the best we could do (we take on more MEV risk for the little NO, but they still get to join and earn the same commission, just on less borrowed ETH for the first two validators. Importantly, large NOs in this system don’t have any financial incentive to sock puppet). The express queue also benefits existing NOs and small NOs, but the benefit is marginal… There is no difference in capital requirements, just potentially shorter wait times in the queue. The express queue is also intentionally limited to 2 tickets (only enough for base validators), so this also helps remove financial incentives to sock puppet.

The 1kx proposal does not have these constraints on sock puppet incentives, so the attempts to “help the little guy” become ineffective at best (little guy blends in with big guy), and detrimental at worst (little guy has greater frictions/barriers while the big guy can more easily jump through hoops).

Questions about RPIP-49

  1. What criteria would you use to decide whether or not to activate the self-defence mechanism to prevent centralised takeover? i.e. how would you identify centralised entities in order to know whether they are >x% of the validator set?

As I described before, we don’t claim (and never have claimed) to be able to identify when centralized entities are >x% of the validator set. We will have to respond to supply/demand dynamics, and if rETH demand consistently outpaces supply we may have to lower commission to balance out this dynamic. The surplus revenue could then go either to rETH or to RPL. I have general concerns about MVI compressing margins for solo/home stakers, and similarly, if we lower commission too much this also compresses margins for solo/home stakers. But I don’t see how the alternative proposal under 1kx would provide any better solutions (back to the neutrality vs opinionated discussion, where opinionated seems only to amplify centralization concerns from my perspective).

  2. If you were unable…
  3. Do you have any data…

Skipping these questions since the premise is misguided.

  4. Apart from the “reduce the no_share” self-defence mechanism, what provisions does RPIP-49 include to help reduce the likelihood of centralised takeover of the validator set?

The express queue should ensure that at least our entire current NO set, as well as small NOs, have access to protocol ETH ahead of new, large NOs (which may or may not be centralized entities). The best defense we have is attempting to attract the maximum number of decentralized participants. I think it is also important to compare the RPIP-49 rework to other options and ask:

  • Will RP be more decentralized under any alternative ideas/proposals? (I think the answer is no)
  • Will Ethereum be healthier and more decentralized with RP under RPIP-49? (I think the answer is yes)

  5. As a NO, how long would you remain with the protocol if no_share was set to 0%?

If the remaining 14% of revenue was going to voter share, then I would remain with the protocol indefinitely. (This premise is misguided though).

  6. It seems you think our proposal will harm permissionlessness. Under our proposal, from whom do you think NOs will need to request permission before launching a validator?

You could view “permissionless” as black and white (it is or it isn’t), in which case I would consider your proposal permissionless. If you instead consider it as a spectrum (how high are the requirements, and do they apply equally to all participants):

  • The most “permissioned system” has the highest requirements (meeting KYC requirements, proving institutional grade services/expertise, etc) – to filter out who to give “permission” to, and who to exclude.

  • The most “permissionless system” has the lowest requirements for everyone (Ethereum, Rocket Pool currently and under RPIP49) – everyone has the same capital requirements, if you bring the capital you may participate.

Then I would consider the 1kx proposal closer to “permissioned” than RPIP49 since the protocol may grant “permission” for some delegates to have much lower capital requirements compared to others - maybe those who identify themselves through Gitcoin Passport or other unspecified means (KYC? Interview with NodeSet – you mentioned this idea in the past? Some other “identification” mechanism?). Previously in this post you mentioned Rocket Pool should apply some identification mechanisms, but you didn’t say exactly what those would be (and yes, building a foundation of the base layer protocol on identification mechanisms does seem to go against a permissionless ethos).

  7. Related to my questions above, do you believe there is a difference between “no_share that is profitable based on the NO’s break-even point” and “no_share that Valdorff thinks each group will accept according to their level of Ethereum alignment”? If so, which do you think will be the primary motivating factor for the majority of NOs?

I do not claim to know motivations, or the no_share that the majority of NOs are willing to accept. I do agree with Valdorff that every NO might be different. Solo staking will still be an option after RPIP49 and I expect many NOs to continue to solo stake regardless of boosted commission RP provides, simply for ethos reasons. Similarly, node operators may continue to stake through Rocket Pool for ethos reasons, even if they would be less profitable than under a higher commission. Separately, I think we can listen to the market to broadly understand if we have balanced supply/demand or not.

  8. If UARS were to stabilise in the zone where no_share is profitable for centralised entities but nonprofitable for home stakers, and an increasing portion of the new validators were from centralised entities, what mechanisms in RPIP-49 would you use to attempt to reverse this trend?

Until it is possible to differentiate these actors I don’t see how it is possible to even know when or if this has happened. If it does become possible someday, then we can direct more rewards to the right actors (but until then, I think neutrality is the best we can do).

  9. Imagine a centralised entity launches a significant number of validators and pDAO decides to begin reducing no_share, as you described here. Onchain activity and social media show a large number of home staker types exiting the protocol, but the centralised entity has not left. In this scenario, which of the following do you personally believe would be the best course of action? a) continue reducing no_share, b) begin increasing no_share, or c) leave no_share unchanged

This premise is misguided, so responding within the constraints you put forward doesn’t make sense.

As I wrote in the link you described:

It sounds like the assumption is: “By keeping the RPL requirement, we can prevent centralization since centralized actors won’t buy RPL”
Even if we take that assumption as true for a minute, my response would be… (lower no_share, give to voter_share)

So under the assumptions in that scenario, only “good guys” can have RPL, so the incentives would reward “good guys” and not “bad guys”. However, I disagree with the entire assumption and premise in the first place.
