This quote is from a recent Discord post I made, which was edited into the article linked above.
There has recently been a great discussion on Discord about identity and governance. I take the stance that identity disclosure is a useful security tool in keeping our governance safe from malicious actors, but this is, understandably, a very touchy subject in crypto.
The article linked above provides a lot of background on my views, so I’ll jump straight into my suggested addition to RPIP-10 which I’ve made this thread to discuss:
At least 51% of any multi-sig participants MUST reveal their identity to a known-trusted person
(i.e. the dev team right now, existing non-anon committee members in the future)
Note that this is less than what you would expect from an employee, for example. No organization would allow its board to be made up of mostly anonymous members. Ideally, management committee members should only enact community will anyway, so anon opinions won’t be ignored, and there is still room in this suggestion for anonymous committee members.
Given that the vote on RPIP-10 is ongoing, I think it’s best that we discuss this as a potential future update. Likewise, the ship has sailed on the suggested IMC members at this point, and I have currently voted in favor of the proposed lineup, as I believe the chances of issues with this group are very low.
That said, there has been some discussion of a greater governance structure that takes ownership of pDAO guardianship from the dev team. pDAO guardianship is a very serious responsibility with heavy control over the protocol, and I believe anonymity to be a security concern. However, there has been a robust discussion on Discord, with several well-articulated and reasonable dissenting opinions starting here: Discord
I disagree with this proposal, though I agree with the idea that an exposed identity can be one factor in trusting an individual.
My preference is for something much more organic, where the community can make up its mind about nominees based on many factors as they see fit. This may include:
Measurable metrics: how active they are (Discord, forum), how much they contribute in various ways (git history, tweets), how long they’ve been involved (account ages), etc.
Soft metrics: wisdom, maturity, alignment with the protocol, alignment with one’s own views, etc.
I think identity spans a spectrum and can flow from soft (e.g., a multi-year-old Twitter account) to hard (KYC checking a federally issued passport), with various points in between (you have personally met them; someone you trust has personally met them and is friends with their dentist; they’ve been on public video calls and have a matching picture on the website of a known-real company).
I don’t think it makes sense to codify how each factor should be weighed against the others. How many git contributions equal a month of account age? Instead, I think we should make information as readily available as possible, and let the community decide who they prefer to trust based on that information.
What I would tweak going forward
Ideally, we won’t be under time pressure when picking committees. In that case, I would love to:
Call for nominees and get a too-large slate
Get some kind of forum-poll to narrow the field (I might actually suggest folks vote to exclude rather than vote to include? I have to think more on this).
After we pick a slate using a method like that, then have the snapshot to ratify.
For the IMC, we’ve taken a shortcut so we can get rolling quickly. Fwiw, if we get a couple/few weeks without governance at some point, I’d be happy to redo the IMC membership with the “longer” version.
This certainly makes sense to me as well. I don’t think the two approaches are mutually exclusive, either.
To be clear, I’m suggesting this as a specific security measure to defend against certain types of attacks. If the community decides this proposed mitigation isn’t worth its cost, so be it. I think this is a fairly good compromise to mitigate something that would be really really bad, though – management committees can do some serious damage even if they’re just dysfunctional and not malicious.
Hopefully, as you said, our processes will continue to improve over time. I’d prefer to see a little more care with future MC elections.
Yes, by this logic, maximum resilience means everyone discloses their identity. I’m sympathetic to this, but given that there was a lot of resistance when I brought this up, this proposal represents what I hope is a reasonable compromise.
Like all good compromises, so far no one seems happy about it.
We don’t necessarily need all identities to be disclosed. We could simply say:
If X amount of signatures are required for the multisig, at least X+Y identities need to be public.
Y could be any number we define. The larger it is, the more security there is. So for minimum security I would propose Y to be 1. Then if one signature is lost, the other known identities can still act while we resolve the issue.
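To make the arithmetic concrete, here is a small sketch of the X+Y rule (the function names and the 3-of-5 example are illustrative, not part of any RPIP):

```python
def min_public_identities(x: int, y: int = 1) -> int:
    """Minimum public identities for a multisig requiring x
    signatures, with a safety margin of y extra known signers."""
    return x + y

def satisfies_rule(public_members: int, x: int, y: int = 1) -> bool:
    """True if a roster with `public_members` disclosed identities
    meets the X+Y requirement."""
    return public_members >= min_public_identities(x, y)

# Example: a 3-of-5 multisig with the minimum margin y = 1 needs
# at least 4 public identities, so if any one known signer is lost,
# the remaining known identities can still reach the 3-signature quorum.
assert min_public_identities(3, 1) == 4
assert satisfies_rule(4, x=3, y=1)
assert not satisfies_rule(3, x=3, y=1)
```

Note that with y = 1 on a 3-of-5 multisig this leaves room for at most one anonymous member; larger committees leave more room.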
As mentioned in the discord discussion, I don’t think this is worth implementing as a hard requirement. I think it discourages legitimate and beneficial participants while not really providing much of a deterrent for a committed bad actor; since the verification method is just an interview with doxed trusted protocol members, actors could be hired or friends could be used to imitate unique people behind the anon accounts.
I also don’t like that it enshrines the need for trusted, central members in the protocol. In the long-term, though certainly a more difficult path, I’d much rather see the pDAO budget management transition into something trust-minimized (like a whitelist-limited system).
I think something akin to @Valdorff’s suggestion where public info and metrics are presented to the community, but also potential members can present additional info that they are comfortable with (such as what @rhettshipp did due to his limited community interaction), is a more viable path forward right now.
(Also note that I am a proposed member to the first IMC and thus I do have a conflict of interest with this proposal. However, I am also willing to be part of the 51% to private-dox themselves if this gains traction and passes as policy)
I think @DaserDog is bringing up a key point - seeing a distinct face on a video call in no way proves that person is not, in fact, working with another member.
To me, much stronger evidence can come from other things. Are they passionate about things? What things? Do they show they care about other community members? Was their account created at a random time (vs. 5 accounts created the same day, all applying to be on a committee)? Have they been using this tag for a long time for other things? Effectively: how many hours of work do we see in this persona?
Convincingly faking many factors is hard (and thus, expensive). Much harder, imo, than doing a video call with a different face.
Hiring actors and creating a convincing history and corresponding online presence for them is a much higher bar than simply making sock puppet accounts, so I think it’s fair to say that this or a similar measure would still increase security. I’ve used this analogy before, but I’ll repeat it here: we still lock our doors despite the fact that someone can just break in through a window instead.
I’m surprised there’s been so much resistance to this suggestion given the extremely low level of effort needed for astroturfing and sock puppets. Most popular subreddits have an extremely high rate of fake accounts, and the only incentive for that is advertising or propaganda. Here we’re talking about a wide range of potential attacks on the fidelity of RP governance. With no restrictions, we’re open to a lot of forms of degradation in decision making and enactment. The possibilities range from “me and my 3 sock puppets are on this committee so I only need 2 more votes to get my way” to “me and my 5 sock puppets completely control this committee, so I’m going to manufacture a reason to save up some cash and then drain the wallet”.
Certainly, these scenarios are still possible if we do minimal identity verification for some members, but it raises the bar for these attacks considerably.
I disagree. Making an account that looks like mine – intelligent debate on a number of topics across discord and the forum, DM conversations with many individual community members, git contributions, real-time speed reactions, etc – is a much higher bar than asking your friend if they’ll sit for a video call if you pay them. I would say a similar idea applies to most of the nominees for the IMC (and I’d hope for future nominees).
A fake account is trivial. A fake account that has built significant reputation in the community is not.
Making an account that looks like mine… is a much higher bar than asking your friend if they’ll sit for a video call if you pay them
Agreed! Maybe this is what the proposal is missing – some sort of explicit reference to a simple background check. My expectation is that the known-trusted person who does the interview also has their reputation at stake by making their assessment public. Something as simple as “we did basic due diligence and this person appears to be who they told us they were.”
How about this revision?
At least 51% of any candidate multi-sig participants MUST reveal their identity to a known-trusted person, who SHALL issue a public statement on the candidate's suitability
(we could also edit the above to use the X+Y approach @Darkmessage proposed)
Ultimately, I think it’s pretty reasonable that at least some potential MC candidates have someone trustworthy vouch for them. As RPIP-10 is now, we can have fully anonymous committees, which introduces security concerns.
I don’t understand what you’re agreeing with. If you’re agreeing that faking my anon account is more expensive than “revealing an identity”, then I don’t understand why the latter should be elevated above the former.
There was some more discussion on Discord starting here: Discord
The short version of my reply to @Valdorff is that I agree that identity should not be elevated above other requirements. Instead, my view is that we should have 1) reporting requirements for candidate votes and 2) basic minimum requirements for the MCs as a whole.
Given this, here’s an example of potential additions to RPIP-10:
Multi-sig participants MAY be anonymous, but the total number of anonymous members on a single multi-sig MUST NOT exceed 49%
All candidates MUST demonstrate minimal engagement with the Rocket Pool community for at least three months prior to a vote on their candidacy
Candidates MUST provide a report on their suitability for candidacy which SHALL include 1) length of participation in the Rocket Pool project, 2) a summary of contributions to the project thus far, and 3) an indication of identity disclosure or lack thereof.
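A sketch of how these three draft requirements could be checked mechanically (the `Candidate` fields and the sample slate are illustrative assumptions, not RPIP text):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    anonymous: bool
    months_engaged: float  # months of engagement with the RP community
    has_report: bool       # submitted the suitability report

def committee_ok(members: list[Candidate]) -> bool:
    """Check a proposed multi-sig slate against the three draft rules."""
    # Rule 1: anonymous members must not exceed 49% of the multi-sig.
    anon = sum(1 for m in members if m.anonymous)
    if anon / len(members) > 0.49:
        return False
    # Rules 2 and 3: three months' engagement and a report, per candidate.
    return all(m.months_engaged >= 3 and m.has_report for m in members)

slate = [
    Candidate("alice", anonymous=False, months_engaged=12, has_report=True),
    Candidate("bob",   anonymous=True,  months_engaged=6,  has_report=True),
    Candidate("carol", anonymous=False, months_engaged=4,  has_report=True),
]
assert committee_ok(slate)  # 1 of 3 anonymous (33%) is within the 49% cap
```

Of course, the soft parts of rule 2 ("minimal engagement") and rule 3 (the report's content) still require human judgment; only the thresholds are mechanical.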
Note that the first point is slightly different from what I originally proposed.
People should be free to use whatever factors they want in determining a candidate’s suitability, including identity or favourite colour, and candidates should be free to include such information if they believe it will be helpful.
However, any language that asserts identity as something of importance, or establishes revealing one’s identity as a default expectation, is incompatible with my view of decentralised governance, and I will vote against its addition.