Rapid Research Reflection

So there’s not going to be any deep introspection or stunning revelations in this post, but I felt I should do some sort of reflection on the Rapid Research Incubator bounty, and also see what others felt about this bounty, which completed January 10th.
https://dao.rocketpool.net/t/round-7-gmc-call-for-bounty-applications-deadline-is-december-10/2422/2

Successes:

  1. Speed: This DAO will never match the speed of a centralized entity, but this was very quick. November 19th was the initial call to action; submissions closed January 10th; review was done by February 8th. This timeline could likely be shortened further by having standing guidelines/deadlines in place.

  2. Quality of submissions: The overall quality was high; Valdorff is a frequent contributor whom we hold to high expectations, but the submissions from community members who had never been awarded by the GMC were tremendous. This could be further improved by better recruiting of candidates.

  3. Cooperation between the RP Team and DAO: I think communication has felt lacking in the last 18 months as the protocol has grown; there is some jurisdictional rivalry, and there have been several episodes of waste (in community member time and pDAO money) due to uncertainty about roadmaps and priorities. However, the Team and DAO are extremely aligned in addressing the concerns of impending competition, and the Team has been extremely supportive of this DAO initiative. I hope this foreshadows closer strategic ties as the relationship between the two entities matures.

  4. Cost: In terms of cost-effectiveness, this was excellent value. I think the submitters should receive more, but it’s hard to predict a submission’s value to the protocol in advance; additionally, I felt it was likely that those who had already made significant progress developing top-ranked ideas would continue to develop them and be awarded again as we move towards implementation.

  5. Recruiting new researchers: Money is fine, but knowing people are interested in your ideas is a powerful motivator. Rocket Pool has had a fairly narrow group of names responsible for pushing major changes, and I think having a deeper bench of people innovating and reviewing ideas is incredibly important as the protocol grows. Hopefully at some point our lead innovators won’t need to be jacks-of-all-trades, and we will be able to sustain the loss of innovators (to burnout, new jobs, etc.) without major interruptions to the protocol’s governance.

  6. Distribution of awards: I think this distribution was close to perfect: pots large enough for the best submissions to encourage quality over quantity, but enough total payments that everyone walked away with something as compensation for their hard work. I probably would not change this.

  7. Community building: Change has a certain inertia; it takes time for community buy-in. Exposing ideas early in development and allowing more people to become invested in their success will likely make for a smoother transition, from a political perspective, than presenting ideas in (semi-)final form. Also, during downtrends, it is helpful to show concerned low-frequency community members that “something” is being done to make the protocol better.

Needs improvement:

  1. Marketing: Many people, even those who spend a lot of time on trading, did not know this bounty existed; it was largely advertised on the Forum (which has lower viewership) and in threads (which would only be visible if you had previously seen them). Next time there should be better advertising in the main Discord channels, and likely direct recruitment of regulars who we suspect would have ideas to contribute if asked.

  2. Cost of award review: The review/grading of the submissions is not the point of this exercise, but it is necessary to distribute funds accurately and to deter spam submissions. This review process took about 50 hours of time from protocol members chosen for their skills and knowledge. It would be better to use some of that time in other ways (maybe plotting the way forward) by simplifying the review process (e.g., only 3 of 5 reviewers grade a given project - see the sketch after this list - or better use of community input, or each reviewer only giving detailed criteria on their top 5 picks). Anything over 25% of funds going towards deciding on the distribution of funds seems terminally inefficient to me.

  3. Failure to launch the next step: After awards were decided, there was no clear roadmap to get the best ideas from here to implementation. Currently, it is likely that we will employ a retroactive award for the intellectual ferment that is occurring now in the Options Draft thread. I think it may be more efficient to have a milestone-type bounty where we pay out for the initial submissions, and then choose a handful (a small committee, 3?) of researchers to bring the ideas to the point of a DAO temperature check.

  4. Wide scope: By necessity, this was a “we want every type of idea” bounty. I think in the future it could be possible to have a “rapid research bounty of the month” - smaller, more targeted, and tied to a strategic roadmap - so that we could have several areas of research occurring in parallel with different innovators. For this we will need a strategic roadmap, so that the GMC does not have to assume that role.
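To make the 3-of-5 reviewer idea in point 2 concrete, here is a minimal sketch of one way the assignment could work (a simple rotation; all names and counts are hypothetical, and this is an illustration rather than a proposed implementation):

```python
import itertools

def assign_reviewers(submissions, reviewers, per_submission=3):
    """Rotate reviewers across submissions so each entry gets a fixed-size
    panel and the total grading load is spread evenly."""
    cycle = itertools.cycle(reviewers)
    return {sub: [next(cycle) for _ in range(per_submission)] for sub in submissions}

if __name__ == "__main__":
    subs = [f"submission-{i:02d}" for i in range(1, 21)]  # ~20 entries, roughly this round's size
    revs = ["reviewer-A", "reviewer-B", "reviewer-C", "reviewer-D", "reviewer-E"]
    for sub, panel in assign_reviewers(subs, revs).items():
        print(sub, "->", panel)
    # With 20 submissions, 5 reviewers, and 3-person panels, each reviewer
    # grades 20 * 3 / 5 = 12 entries instead of all 20 (a 40% cut).
```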


As you know, I was one of the reviewers, so take this all with a grain of salt.

I don’t think that the utility of the review and grading process ends at ‘accurately distribute funds and prevent spam.’ I honestly see that as more of a side-effect of the actual utility, which is taking 20+ submissions and ordering them roughly by ‘worth-more-time-ness.’ This increases positive impact in a couple of ways:

  • It guides community members with little time to the best entries and the ones most likely to proceed, which makes that ‘Community Building’ point more effective.
  • It can focus next step efforts on the most promising entries.

Basically, at some point you need to order the entries and present a more manageable number if you expect the wider community to engage with them. Further, if you don’t do this well at the review stage, it will naturally be done by those working on next steps, only the output won’t be as public or as easy to engage with.

TL;DR: Even 25% of funds for a purely administrative process would be very inefficient imo (for clarity, I think the review cost was ~37% of the total here). However, I don’t think that the review stage was a purely administrative process. I think it gives efficiency gains for both next steps and community engagement.


I’m in general agreement with your other points, and as I’ve said before, I think it should become a semi-regular exercise. I do think that to some extent the speed impacted the marketing, but I also want to highlight that this took place over the holiday period, and that tends to harm engagement.


This, in general, is a weak point in Rocket Pool (and has some tie-in to Successes #3). There might not be a great solution, but as we move into a more pDAO-governed protocol, it would be helpful to have a non-core-team mod for the Discord.

I know this is a sensitive subject, but for something as potentially huge as these protocol improvements, being able to ping the Discord to promote this effort, as well as to make people aware, would have been very helpful.


So no grains of salt needed. I agree with everything you said. I was going for brevity above, which is dangerous, so I’ll be a little clearer here:

Summary

I think there are two roles for the reviewers: the first is grading for award distribution; the second is assisting the transition from point A (raw ideas) to point B (implementation) to keep the momentum going - I’ll call this slingshotting.

I would argue grading is a primarily administrative role; it doesn’t really add value after research has been submitted. The threat of grading deters low-quality submissions and encourages high-quality ones in future rounds, so it has to be done with some degree of fidelity. Slingshotting, by contrast, is hugely value-additive.

It was fairly evident after awards were announced that the criteria for distributing awards are different from the criteria for slingshotting the ideas into implementation. Ranking helps significantly, as you pointed out, because there is definitely overlap between the two, but only loosely (a p-value of 0.1-ish, let’s say). An idea awarded 1st vs 3rd, or 5th vs 6th, says little in practice about whether it should move forward. Stopping at the grading stage may cause people to take away the wrong message (examples might include a good idea with a bad implementation that could succeed in another form, a submission with lowish impact that is also extremely easy to implement, or a submission deemed unlikely to succeed overall but with portions that could be used in other proposals). We as the DAO have also flailed trying to figure out how to transition from these ideas and highly rated awards to implementation - valdorff and samus are taking the lead on this in a very structureless way.

I think that in the future, it would be optimal to have our reviewers spend (somewhat) less time grading and (somewhat) more time slingshotting, which is more commensurate with the skills we chose them for. We could do this by having the community filter some submissions, using a simplified grading rubric, flattening the award structure (the top 5 entries get more equitable awards, so arguing over 2nd vs 4th is moot), or focusing the contemporaneous discussion on future plans rather than ranking.

The comparison I think of is a university professor. Weekly, she spends 5 hours teaching, 5 hours on office hours, 10 hours on research, 10 hours preparing her courses, and 10 hours reviewing papers and grading exams (she’s part of a union). If you were able to decrease her grading time to 5 hours (say, through automation, use of TAs, or changes in the types of assignments), you would probably see some downstream negative effects (students cutting corners, etc.), but these would likely be more than compensated for by the extra time spent preparing lectures, doing research, and holding office hours, all of which add actual value for her students. There will still be some optimal level of this administrative work, below which the negative effects outweigh the positive, but that level can be lowered by adding efficiencies rather than just cutting hours.


Great writeup! I appreciated the viewpoints and reflections. I agree that this is a great opportunity to build community, get people discussing ideas, and get development work done at a very cost-efficient price point.

Speed

It was certainly better than it could have been, but I feel more could be done here. There was a lot of work that could have been done in advance (Who is going to review it? What are the criteria? How will results be communicated?) but that instead waited until it became the bottleneck before it was worked on. This is not to lay blame on anyone - just that we are learning how to move as a DAO, and I think there are some opportunities for improvement here.

Ranking of Ideas

One thing that I think could have been done better is not conflating soliciting new ideas (which should prioritize innovativeness and usefulness) with ranking which ideas should be implemented (which should ignore innovativeness). We can incentivize out-of-the-box thinking, innovativeness, fleshing out of ideas, and looking into implications - but these are subtly different from ranking the ideas that would help the protocol.

As a result of the above, the distribution of rewards, to me, too heavily favored people who submitted established concepts that were already known, and thus that we didn’t need to pay to obtain. I think @epineph submitting established ideas and not taking payment went a long way towards fixing this (thank you), but in the future we could address it in the design itself.

Closing Thoughts

I personally don’t think that spending money on admin and review is bad, but I don’t have strong opinions on the exact percentage.

Lastly, there are lots of little bits of work that people are stepping up to do, and I’m thankful for that. @LongForWisdom’s summary is helpful and necessary. @ShfRyn keeping people updated on what stage the project is at and what is being done also deserves praise.
