Grants/Bounties Scoring Rubric

I don’t want to count any RPIP chickens before they hatch (especially as we’re not at quorum yet), but RPIP-15 looks likely to pass. One part of the RPIP specifies that the GMC shall develop a scoring rubric for grants/bounties/retrospective awards; that may end up being a single rubric, or two or three separate ones. The RPIP specifies that the GMC must use these rubrics to score all proposals, but it doesn’t say that the scoring must be determinative in deciding awards. That was intentional, as even the best rubric will not capture everything that should go into those decisions.

The RPIP specifies that the community should be involved in developing the rubric. Given how long it will take from today until we have a functioning GMC (the RPIP has to pass, then the nomination period, then voting on nominees), I thought we could get the rubric discussion started now. I don’t think we need to be too specific about the rubric’s design at this stage. Here is a general set of categories to get us started, though I’m also okay if we end up going in a completely different direction.

Categories

Impact (Up to 60 Points): If successful, how much of an impact will this have on the protocol?

Return on Investment (Up to 20 Points): Relative to the size of the budget ask, how much do we believe this will deliver for the protocol? In other words, recognizing that funding is limited, how much would this grant benefit the protocol relative to the amount being asked for?

Feasibility (Up to 20 Points): Given the information provided in the application, how likely is it that the individual or group submitting it can accomplish this goal?
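
A minimal sketch of how the additive version could be scored (the caps come from the categories above; the function name and validation are just illustrative):

```python
# Illustrative only: additive rubric using the per-category caps proposed above.
CAPS = {"impact": 60, "roi": 20, "feasibility": 20}

def additive_score(impact: int, roi: int, feasibility: int) -> int:
    """Sum the three category scores; the maximum total is 100."""
    for name, value in [("impact", impact), ("roi", roi),
                        ("feasibility", feasibility)]:
        if not 0 <= value <= CAPS[name]:
            raise ValueError(f"{name} must be in [0, {CAPS[name]}]")
    return impact + roi + feasibility

print(additive_score(impact=45, roi=15, feasibility=10))  # 70 out of 100
```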

I think we could use this for Grants and for Bounties (though Bounties would probably need the weights changed a bit). It doesn’t quite work as well for Retrospective Awards, but I feel like those are mostly a question of impact above all else.


Thanks for starting this! Although moon shots are useful at times, I actually think feasibility should be the most important metric, to protect our investment. For instance, I could submit a grant for 10 RPL to make a personal call to Joe Biden to make rETH a coequal currency to the US dollar: this would score an 80 on the scale above but would obviously never happen. Besides adjusting the relative weights, another way to prevent this would be to make the rubric multiplicative rather than additive, so that a poor score in any single section would be essentially disqualifying.
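
To make that concrete, here’s a minimal sketch (hypothetical numbers, reusing the caps from the original post) of how that proposal fares under each approach:

```python
# Hypothetical scores for the "call Joe Biden" proposal:
# maximum impact and ROI, zero feasibility.
impact, roi, feasibility = 60, 20, 0

additive = impact + roi + feasibility        # 80 out of 100
multiplicative = impact * roi * feasibility  # 0: a zero anywhere disqualifies

print(additive, multiplicative)  # 80 0
```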


I was also thinking a multiplicative table would work a little better for this. Feasibility and impact are good starting points; ROI is a reasonable metric, but for some projects it might be extremely hard to quantify. I’ll have to think on that one and see if I can come up with a better option or phrasing. Thanks for getting this started, Calurduran!

Very nice. It definitely needs to be multiplicative, as others have stated. Also, I agree with @epineph: impact is very subjective and difficult to predict. In fact, people are notoriously bad at judging the impact of their own decisions.

Some sort of payout system where part of the grant money is delivered before the project ships and part after would allow impact to be determined more effectively and independently.


A multiplicative table, with impact 1-10, ROI 1-5, and feasibility 1-5, with the GMC determining the score in each category based on the grant/bounty application. If the applicant disagrees with the scores, they could submit evidence/information explaining why the grant/bounty deserves a different score in any category.
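
A minimal sketch of that table (the ranges come from this post; normalizing the product to a 0-100 scale is my own assumption, just one option):

```python
# Illustrative multiplicative rubric: impact 1-10, ROI 1-5, feasibility 1-5.
RANGES = {"impact": (1, 10), "roi": (1, 5), "feasibility": (1, 5)}
MAX_PRODUCT = 10 * 5 * 5  # 250

def multiplicative_score(impact: int, roi: int, feasibility: int) -> float:
    """Multiply the three category scores and normalize to 0-100."""
    product = 1
    for name, value in [("impact", impact), ("roi", roi),
                        ("feasibility", feasibility)]:
        low, high = RANGES[name]
        if not low <= value <= high:
            raise ValueError(f"{name} must be in [{low}, {high}]")
        product *= value
    return 100 * product / MAX_PRODUCT

print(multiplicative_score(impact=8, roi=4, feasibility=2))  # 25.6
```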

I’ve never experienced a multiplicative rubric, but I’m open to trying it out. I think the rubric will also evolve as the committee uses it, and I would see it as one of the responsibilities of the committee to marshal that process.

I don’t think we need a path for the applicant to officially disagree. We have a lot of dispute resolution built into the RPIP. If the applicant disagrees with a score then they can modify their application during the subsequent application round.


Bumping for visibility since this is about to be super relevant to everyone.

Rubric thoughts:

  • first things first: what is/are our target(s), and thus what does the score represent? Examples:
    • stimulate node operators
    • stimulate rETH usage
    • pump RPL bags
    • get joe to sleep more
  • Grants should target one of our goals, and deliver measurable progress:
    • the grant supports one of our goals (see bullet 1)
    • the grant has a high likelihood of being completed
    • the cost of the grant is justified by its expected impact
  • Inspired by the IETF’s “rough consensus” ideas (see RFC 7282), I think the primary considerations should be:
    • Lack of disagreement is more important than agreement (ie there is nobody who “can’t live with this”)
    • Consensus is the path, not the destination (ie the point of consensus is a good solution, consensus itself is not a goal)

I’d like to stress that since many of us are techies, we immediately gravitate towards “what should the formula be”, but that should always be subordinate to the goals of the formula. If we can clearly define the goal of the rubric then making a scoring ladder is more or less trivial.
