Round 5 - GMC Community Discussion of Submitted Applications

In order to keep the application threads clear of discussions (to make it easier for committee members to read and score them), please use this thread for any and all questions and discussion of round 5 grant, bounty, and retrospective award applications.

@atlmapper

  • can you comment on how docit would work after a significant upgrade? I’m guessing it would need quite a lot of work to crawl, refine, optimize, refine more.
  • am I right in assuming documentation must be really solid (up to date, non-contradictory, fairly clear) for good results?

Hi all. This is a request for input on how often RPLdefi.com (Rocket Pool DeFi Market Rates) should be updated moving forward.

Right now, RPLdefi.com is updated twice a month. Should this be expanded to four times a month, i.e. roughly once a week? Here is a poll to find out.

Should Rocket Pool DeFi Market Rates be Updated Weekly?

  • I use this info; I would benefit from it being weekly instead of every 2 weeks
  • I use this info; the benefit of weekly updates would be minor to me
  • I don’t use this info; I would if it was weekly
  • I don’t use this info; I don’t think this would cause me to use it

Please fill out this poll to let us know if we should update the information more often.

Also, here’s a second part, regarding video content for Rocket Pool DeFi Market Rates. Three videos have been created thus far to walk through the market rates and current incentives. Are these of value, and could they help onboard more users to Rocket Pool DeFi opportunities? Please let us know what you think.

Are Videos Helpful Content to Add to Rocket Pool DeFi Market Rates Updates?

  • Yes, they are helpful and worth it.
  • No, I don’t use videos, I prefer text.
  • Not sure, I don’t feel strongly either way.

Thank you all very much!

can you comment on how docit would work after a significant upgrade?

I think what you’re asking is higher level: a specific block of docs, code, and conversations is tied to v1, and when we ship v2 the bot should reference a new set of all those pieces.
We can add a new column in our db to create tags that associate groupings of content (code, docs, chat, etc.), then keep calls tied to a specific tag.

Furthermore, we’d have to send you a reference list of all matching URLs to help confirm the correct associations per upgrade. Overall, it’s doable, with some collaborative work from both teams.

Happy to meet the need; that’s a great question.

Side note: the bot references the most recent timestamp to help handle smaller content updates and send the most recent info to users.
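To make that concrete, here’s a minimal sketch of what the tagging could look like on our side. The table and names are hypothetical, not our actual schema:

```python
import sqlite3

# Hypothetical schema sketch: a version_tag column groups docs/code/chat rows
# per release, and the bot's retrieval queries stay pinned to one active tag.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE content (
        id INTEGER PRIMARY KEY,
        kind TEXT,          -- 'docs', 'code', 'chat', ...
        url TEXT,
        body TEXT,
        version_tag TEXT,   -- e.g. 'smartnode-v1', 'smartnode-v2'
        updated_at TEXT     -- ISO timestamp; newest rows win for small updates
    )
""")

ACTIVE_TAG = "smartnode-v2"  # flipped once the v2 content set has been crawled

def fetch_context(kinds=("docs", "code", "chat")):
    """Return the freshest rows for the active version tag only."""
    placeholders = ",".join("?" * len(kinds))
    return con.execute(
        f"""SELECT url, body FROM content
            WHERE version_tag = ? AND kind IN ({placeholders})
            ORDER BY updated_at DESC""",
        (ACTIVE_TAG, *kinds),
    ).fetchall()
```

Flipping the active tag after a v2 crawl keeps answers sourced only from the new content set, while the timestamp ordering handles the smaller in-place updates mentioned above.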

@Valdorff

Avado

In general, I don’t believe we should be in the business of either subsidizing unprofitable businesses or giving more profit to profitable businesses. If you go to ava.do, you’ll see solo, RP and Stader right up front. This is the core of their offering.

I would also encourage the GMC to talk with #support regulars. It is my understanding that a number of Avado users have been burned. It’s not clear to me that it’s appreciably easier than smartnode (by easier I don’t mean the median experience so much as “probability of not making a significant mistake”).

One thing that is interesting to me is an alternative client. My current understanding is that Avado is not an alternative and that their open source package is built on top of and fully reliant on RP’s smartnode. If I’m understanding that wrong, I believe an independent client implementation would be worthy of an award.

FxBlox

I have two components here. I’ll start with my thoughts on the project, and then hit price after.

This project is really cool for people that can do without it. Setting up more easily and having metrics accessible is nice. That said: I think a “user base with little to no technical background” is extremely dangerous and should be actively discouraged. What will they do if there’s a critical hotfix for the client they’re using? From the small video included, it seems likely they won’t even know what client they’re using, let alone how to take effective action. See my rant below.

Val's rant about not making RP too easy

Validators are not the average users of ETH; they are infra providers that need to be able to handle unexpected issues. I (Valdorff) think making it too easy is a negative, not a positive.

I agree creation and management should be easy and smooth - but! You must keep abreast of things. If an update in your Linux distro blows things up unexpectedly, if your execution client releases a critical update between smartnode releases, if your server is fried by a power surge, etc - you must be able to respond. You don’t need to know everything ahead of time, but you need to be willing to learn, spend time on it, and get things working well.

If web3 gets the same use as web2, that’s about 5 billion people. If 50% of ETH is staked and every validator is a separate human (bad assumption), that’s under 2 million NOs. If each one was a totally separate user, that would be 0.04% of ETH users. Realistically, it’s much lower than that as a handful of huge entities account for the vast majority of validators. This is a specialist role. For context, about 0.27% of car drivers are mechanics.
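For the curious, the back-of-the-envelope math behind those numbers (rounded inputs; the ~120M ETH supply figure is my own rough assumption):

```python
# Back-of-the-envelope numbers behind the percentages above (rounded inputs).
web2_scale_users = 5_000_000_000        # ~web2-scale adoption
eth_supply = 120_000_000                # approximate ETH supply (assumption)
validators = eth_supply * 0.5 / 32      # 50% staked, 32 ETH each -> ~1.9M
share = validators / web2_scale_users   # ~0.04% if every validator is unique
print(f"{validators:,.0f} validators, {share:.2%} of users")
```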

Price. I’m going to be positive and take this as “a tool to make it easier for qualified users to spin up and monitor”. I don’t believe that’s worth $50k. Maybe 10k. And again - part of me thinks this might be a net negative, especially insofar as it’s marketed towards a “user base with little to no technical background”.

Inverter Network

Several thoughts:

  • It sounds quite cool. Essentially, the use case is “donating”. I’m giving up x% APR in staking rewards for <x% APR in a project token.
  • I don’t imagine many users using it; “give away money to a protocol” just doesn’t seem like a large target market.
  • $50k is a lot for a project I don’t see getting traction
  • Most of this behavior doesn’t need a new protocol. Drips, Sablier, and LlamaPay all enable sending a set amount or percentage of tokens per unit of time. Holding rETH and sending a percentage would be darn similar to sending the yield (see the quick sketch after this list).
    • Here Inverter’s differentiators are
      • KPIs and changing how much users receive of a project token based on them. While I think this is interesting, in the end, it’s a donation. As such, I don’t really see “donate less if you hit KPIs” as driving much behavior.
      • Simplified onboarding vs the examples I gave. If this is the main goal, we should instead commission a small example with a blog post and a video showing how to use an existing protocol to stream donated yield.
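To illustrate what I mean by “darn similar” - a rough sketch with made-up numbers (this isn’t any of those protocols’ actual APIs, just the math):

```python
# rETH appreciates against ETH at roughly the staking APR, so streaming ~APR
# worth of the rETH balance per year leaves the donor's ETH-denominated value
# roughly flat while the recipient effectively receives the yield.
apr = 0.035   # assumed rETH appreciation rate (illustrative)
reth = 10.0   # starting rETH balance
rate = 1.00   # starting rETH/ETH exchange rate

for year in range(1, 4):
    rate *= 1 + apr        # exchange rate drifts up with staking yield
    streamed = reth * apr  # stream ~apr of the balance (Drips/Sablier/LlamaPay style)
    reth -= streamed
    print(year, round(reth * rate, 3), round(streamed * rate, 3))
# Donor stays near 10 ETH of value; recipient gets roughly the yearly yield.
```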

So I guess my question here is: why does the team believe there’s demand? Evidence of similar constructs, within Ethereum or even beyond Ethereum, would help answer this.


OKcontract

As is often the case, I’m going to ask: “why is this better than existing solutions?”

The RP team has a widget, so I can’t imagine swapping to this one. As for generally usable ones, I believe the 1inch widget is generally available. Both of those solutions can route through both the market and our own smart contracts.

Cryptoversidad’s work

I believe we’ve had a couple of previously funded education campaigns. Have they been successful? Why or why not? What would this add by comparison?

MERCLE

Why is this better than other quest options, such as Galxe or RabbitHole?

Sharpe Labs

Leveraged rETH can already be done using Aave, simplified by Defisaver (among other options). What is the benefit of this option by comparison? That covers the first two milestones. What’s the benefit of milestone 3? Insofar as the first two milestones are successful, there should be very little to arbitrage. In that case, this would just be a rETH-backed LST, e.g. Gravita or Raft.

Also - my leaning against “subsidizing unprofitable businesses or giving more profit to profitable businesses” applies.

At this point figured I’d just go for completion on opining :stuck_out_tongue:

Tikuna

@Davidlerma would love a response to this comment as I think I lack clarity on a number of things

A bunch of thoughts/questions:

  • I’d love to hear more about when Tikuna 1.0 has actually proved useful in the past. Has it been mainly useful for the NOs themselves, for overall network health, or is it currently in the domain of “theoretically useful if X”?
  • My current understanding is that a lot of the benefit is to Ethereum, rather than to RP. If that’s right, it seems more like we’re providing a service rather than getting one, so it seems odd that we’d pay for it.
  • The milestones don’t talk about developing Tikuna 2.0, just about integrating it with RP and writing a report. How much cheaper would it be to just integrate Tikuna 1.0?
    • In many cases, we see that the 80/20 rule applies – eg, monitoring at all gives us most of the benefits and improving the tool (to 2.0) is only a marginal improvement. Is there a reason to expect an exception to that heuristic?
    • Rocket Pool runs bog-standard Ethereum validators. Given that Ethereum validator dashboards already exist, why do we believe there is $46k worth of work (milestones 1 and 2) needed for this?
  • I looked for your report at the links provided and couldn’t find it. A blog post from mid-July indicates that the report was 90% done. What happened there?

@Valdorff Re: your comments on FxBlox

I agree with you that “user base with little to no technical background” is not the right target audience. The realistic target group is “Power Users”: people who are comfortable with maintaining a piece of tech. That’s still a huge step up from the current group that can run a Smartnode; it’s fair to say that one currently needs “SysAdmin”-level knowledge to be a NO.

Please also note that the demo video is just the proof of concept. The UI will be vastly expanded over the course of development. There will be multiple steps after the user clicks “Add Stack” to show exactly which client is being deployed and other essential settings. The idea, though, is to have the minimum number of steps possible; the user can then customize further in the dashboard. The dashboard would be far more comprehensive than this PoC.

Re: disaster management, the mobile app would be an ideal companion, as it can notify the NO when something goes wrong, and they’ll be able to mitigate from a remote location (over a libp2p connection). The same goes for updates: notifications and buttons in the app.

Re: your car analogy, mechanics repair things, so they’d be developers in our context? NOs would be akin to drivers: people who can operate the thing but don’t necessarily understand how it works internally. The relevant percentage would be adults who have a driving license divided by the total population.

Re: price, the current PoC was developed by 2 team members in 3 weeks. The milestones chart 5 developers working for 3 months. The final product would have roughly 10x more work put into it than the PoC ((5 devs * 12 weeks) / (2 devs * 3 weeks), collectively more than one engineering year of work).

  • I’d love to hear more about when Tikuna 1.0 has actually proved useful in the past. Has it been mainly useful for the NOs themselves, for overall network health, or is it currently in the domain of “theoretically useful if X”?

Indeed, the Tikuna project is in an early stage. We have successfully finished a research project funded by the Ethereum Foundation; you can find more details in our research paper [https://sakundi.io/wp-content/uploads/2023/10/ISPEC2023_paper_53-copy.pdf] and in our code repo [GitHub - sakundi/tikuna: A P2P network security monitoring system for the Ethereum blockchain. 🔐]. The code for the project is open source. With the research project, we have tested our ideas for security monitoring of blockchain validators, starting with Ethereum. However, Tikuna 1.0 has also proven to be a valuable asset in practical scenarios. It has primarily been used by a small set of node operators (NOs) to monitor and maintain the overall health of their validators. NOs have found Tikuna 1.0 especially beneficial in terms of real-time monitoring and receiving notifications in the event of specific types of attacks, which has allowed for swift and informed responses to potential security threats. Its practical utility goes beyond theoretical value and has been a valuable tool for our partners in safeguarding their systems. We want to extend the use of Tikuna by helping the Rocket Pool project and its community.

  • My current understanding is that a lot of the benefit is to Ethereum, rather than to RP. If that’s right, it seems more like we’re providing a service rather than getting one, so it seems odd that we’d pay for it.

Our tool provides security monitoring for blockchain P2P networks, and it is especially useful for node operators to know the state of their validators. We are starting with networks using Ethereum technology. However, since Rocket Pool allows people to stake and operate validator nodes, we think our tool would be very useful for the community. Moreover, since Tikuna operates at the P2P layer, it is concerned with the health of the validators themselves and is agnostic to what kinds of tokens run on top of them.

Besides, it’s important to highlight the mutual benefits that can be realized through this partnership:

  • Enhanced Network Security: While Ethereum benefits from improved network security, Rocket Pool also gains a more secure and resilient infrastructure, safeguarding its operations and assets.

  • Maintaining Reputation: Rocket Pool’s commitment to security and reliability is essential for its reputation and user trust. Tikuna’s integration helps reinforce this commitment.

  • Collaboration and Learning: Partnering with a security-focused project like Tikuna provides opportunities for knowledge sharing and collaborative problem-solving, which can be invaluable.

  • The milestones don’t talk about developing Tikuna 2.0, just about integrating it with RP and writing a report. How much cheaper would it be to just integrate Tikuna 1.0?

→ Your observation is indeed accurate. The focus of our current project milestones primarily revolves around integrating Tikuna 2.0 with RP and writing a comprehensive report. Integrating Tikuna 1.0 might be a more cost-effective approach. However, in Tikuna 2.0, we would like to develop more features and capabilities that align more closely with our long-term goals and objectives, including advanced attack detection and improved monitoring. While it may come with additional development costs, we believe Tikuna 2.0’s contributions will prove invaluable in enhancing the security and reliability of the network. Our aim is to provide the most advanced and effective solution for RP users, and we see Tikuna 2.0 as a significant step in that direction.

Tikuna 2.0 will include new features such as:

  • Improved incident detection for validator nodes.

  • Improved dashboard with better usability.

  • Dashboards adapted to the RP environment.

  • Increased number of supported Ethereum clients.

  • More security related metrics.

    • In many cases, we see that the 80/20 rule applies – eg, monitoring at all gives us most of the benefits and improving the tool (to 2.0) is only a marginal improvement. Is there a reason to expect an exception to that heuristic?

→ You bring up a valid point regarding the 80/20 rule, where initial efforts often yield the majority of benefits. While this rule holds true in many cases, we believe that Tikuna 2.0 represents an exception to this heuristic for several reasons:

  • As mentioned before, the Tikuna project is at an early stage; many things can be enhanced. However, we are confident that it can significantly benefit your community, growing in features and incident detection accuracy as it interacts with the RP infrastructure. We also want to be involved in the RP community by getting feedback on what they would like us to develop further for Tikuna.

  • Evolving Threat Landscape: Blockchain security is a dynamic field, and as new threats and attack vectors emerge, having a more advanced tool like Tikuna 2.0 can be crucial. It offers improved capabilities to detect evolving security challenges effectively.

  • Comprehensive Monitoring: Tikuna 2.0 is designed to provide more comprehensive monitoring and early threat detection. This broader scope can offer greater security assurance, especially in complex and rapidly changing blockchain environments.

  • Long-Term Value: While initial monitoring can offer substantial benefits, the long-term value of Tikuna 2.0 lies in its ability to adapt to emerging threats, improve network security, and empower users with more comprehensive insights.

While the 80/20 rule is a useful heuristic, it’s essential to consider exceptions, especially in fields like blockchain security, where staying ahead of threats and providing robust solutions are paramount. Tikuna 2.0’s development aligns with our commitment to delivering the highest level of security and usability for our users.

  • Rocket Pool runs bog-standard Ethereum validators. Given that Ethereum validator dashboards already exist, why do we believe there is $46k worth of work (milestones 1 and 2) needed for this?

→ Thank you for your question. While it’s true that Rocket Pool operates Ethereum validators and there are existing Ethereum validator dashboards available, there are several reasons for the budget allocated to milestones 1 and 2:

Usually, the available dashboards for Ethereum and other blockchains mostly include state and performance information about the different components of the network. However, they don’t commonly include security information with alerts when security incidents affecting the network occur. Tikuna offers that kind of information, processing the data with AI methods (see the illustrative sketch at the end of this answer).

Security Considerations: Security is fundamental in blockchain operations. The development and integration of Tikuna ensure that the monitoring system is aligned with the highest security standards, which may entail additional work and, consequently, budget.

Usability: Beyond basic functionality, the budget also covers improvements in usability, making the system efficient for Rocket Pool’s users.

Our aim is to ensure that Rocket Pool users benefit from a robust and tailored security monitoring solution that aligns with its specific operational needs.
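To give a sense of what we mean by processing the data with AI methods, here is a minimal illustrative sketch of anomaly scoring over P2P connection metrics. The features and model below are placeholders for illustration, not our production pipeline:

```python
# Illustrative anomaly scoring over per-interval P2P metrics for one node,
# using scikit-learn's IsolationForest. Placeholder features, not Tikuna's
# actual model or feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features: peer count, mean peer churn, gossip message rate,
# and share of connections coming from a single subnet.
baseline = rng.normal(loc=[50, 2.0, 120.0, 0.05],
                      scale=[5, 0.5, 15.0, 0.02],
                      size=(1000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# An eclipse-style pattern: the peer set collapses toward one subnet.
suspect = np.array([[48, 9.0, 40.0, 0.85]])
score = model.decision_function(suspect)   # lower = more anomalous
if model.predict(suspect)[0] == -1:
    print(f"alert: anomalous P2P profile (score={score[0]:.3f})")
```

In practice, an alert like this would be routed to the node operator (for example via email, Slack, or Telegram) for investigation.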

  • I looked for your report at the links provided and couldn’t find it. A blog post from mid-July indicates that the report was 90% done. What happened there?

→ The project was successfully finished, and we presented our final work at the ISPEC 2023 conference. We are awaiting its final publication. You can access the full details of our work by reading the submitted paper and taking a look at the code repository linked above.

NOs have found Tikuna 1.0 especially beneficial in terms of real-time monitoring and receiving notifications in the event of specific types of attacks, which has allowed for swift and informed responses to potential security threats. Its practical utility goes beyond theoretical value and has been a valuable tool for our partners in safeguarding their systems.

I would like to understand the claim that this is generating concrete value and not just theoretical value.

Have there been Eclipse attacks on users beyond your test attacks? Have there been other detected non-test attacks? If an attack is detected, are there effective courses of action to mitigate them based on the attack?

Maybe we could get at concrete value with a few numerical questions:

  • Roughly how many user-months has Tikuna run?
  • How many non-test attacks has Tikuna captured?
  • What percentage of users saw a non-test attack that Tikuna captured?
  • What percentage of attacks were able to be mitigated/addressed due to Tikuna’s real-time notification? What does that look like?

Hey @Valdorff thank you for the clarifying questions!

So, to respond one by one:

(1)

This is not targeting donations but dollar-cost-average (DCA) investments. The mechanism can be explained as simply as this:

Project Z wants to distribute 100,000 of its native tokens to community stakeholders in exchange for 100 ETH collected from the staking rewards and added to their reserve pool.

How does it work?

  1. Project Z integrates the Yield Staker widget to their website.

  2. Project Z deposits 100,000 of its native tokens to the funding vault.

  3. Project Z chooses a KPI for yield generation, e.g. 100 ETH, in order to complete the full distribution of 100,000 staked native tokens over a period of time.

  4. Project Z decides the % of ETH staking rewards to be taken as an “investment” to collect the set 100 ETH while distributing 100,000 of its native tokens, which is 20% in this case.

  5. Once set, every 2 weeks the ETH rewards collected in the reserve pool trigger a distribution: stakeholders can claim the remaining 80% of their ETH staking rewards along with the native token rewards corresponding to the 20% of ETH collected.

From a User perspective:

  1. Bob wants to support Project Z and gain exposure to its token via DCA.

  2. Bob goes to the Project Z’s website.

  3. Bob stakes ETH or rETH directly from the widget.

  4. Bob withdraws ETH rewards and native token rewards of Project Z.

This way, we want to enable a soft LBP event with a dynamic mechanism, an established reserve pool, and conviction rewarding of loyal community stakeholders.
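To make the flow above concrete, here is a rough numeric sketch of one settlement using the example figures (100,000 native tokens against a 100 ETH yield KPI, 20% of staking rewards diverted). This is illustrative math only, not our actual contract code:

```python
# Illustrative settlement math for the example above; not Inverter's contracts.
KPI_ETH = 100.0                 # total ETH to collect into the reserve pool
NATIVE_TOKENS_TOTAL = 100_000.0 # native tokens deposited in the funding vault
DIVERT_SHARE = 0.20             # % of ETH staking rewards taken as "investment"

reserve_eth = 0.0

def settle_epoch(epoch_staking_rewards_eth: float):
    """Biweekly settlement: divert 20% of rewards to the reserve and release
    native tokens pro rata to progress toward the 100 ETH KPI."""
    global reserve_eth
    diverted = epoch_staking_rewards_eth * DIVERT_SHARE
    reserve_eth += diverted
    claimable_eth = epoch_staking_rewards_eth - diverted          # the 80%
    native_release = NATIVE_TOKENS_TOTAL * (diverted / KPI_ETH)   # pro rata
    return claimable_eth, native_release

# Example: stakeholders earn 5 ETH of staking rewards in one 2-week epoch.
eth_out, native_out = settle_epoch(5.0)
print(eth_out, native_out)   # 4.0 ETH claimable, 1000.0 native tokens released
```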

(2)

We already have two interested projects that are looking forward to integrating this mechanism as a solution after a smaller-scale proof of concept.

(3)

As mentioned in the grant, 20k USD out of the 50k USD will be used to open an audit bounty using Hats Finance to incentivise open source builders to find bugs in the code. We believe 30k USD is a fair amount to ask for a mechanism that we expect to gain a fair amount of traction today, with more use cases tomorrow as more projects look to bootstrap their liquidity reserves.

We will be leveraging Sablier’s streaming module to enable this mechanism, but what we offer, as detailed above, is not currently possible with simple yield-sending or yield-directing functionality. None of those solutions enable bidirectional asset flows based on dynamically set conditions.

Again, thank you for all the questions and for clearly stating the differentiators you initially recognised. I hope this additional detail helps you better understand the intended benefits of this mechanism for the Rocket Pool community.

Thank you!


We concluded the research project mentioned above in Q2 2023 and are currently in the final stages of developing an initial MVP for potential customers. During this phase, we are conducting tests with a select group of initial customers who have found practical value in real-world use cases. While our customer base may be limited at this stage, it’s important to note that our approach is grounded in actual P2P attacks, some of which we have even implemented ourselves [GitHub - sakundi/discv5-testground: Testground plans for discv5.]. Additionally, a prior security audit of the Ethereum P2P layer conducted by Least Authority highlighted vulnerabilities related to node identity generation and acknowledged the risk of eclipse attacks [https://leastauthority.com/static/publications/LeastAuthority-Node-Discovery-Protocol-Audit-Report.pdf]. As of now, these issues remain unresolved.

Our goal is to collaborate with RP to advance our prototype to a more mature stage that delivers tangible benefits to the RP community and the broader Blockchain community. This approach is based on addressing real-world security concerns rather than being purely theoretical in nature.

At this moment, 2.

We record numerous anomalies on a daily basis through the analysis of validator node interactions within the P2P layer. Subsequently, these anomalies may be classified as potential attacks.

Out of the customers who tested our proof-of-concept, two of them observed anomalies in their P2P connections that could potentially be categorized as attacks.

At present, we exclusively offer alerts through email, Slack, or Telegram to the concerned users, typically node operators. However, we have not implemented an automated mitigation process as part of our services.