This post contains a post-mortem analysis of the issue in v1.6.1 and below that caused high CPU load after Redstone’s first rewards interval on September 1st, 2022.
It also contains an explanation of the short-term solution employed in v1.6.2, and an exploration of a few candidate options for a more robust medium-term solution.
Posts like this one generally go into the rocketpool-research repository.
However, because this contains a potential redesign with long-term ramifications, I would like to collect everyone's feedback on it in the same way I did with the Smoothing Pool design several months ago, as that dialog proved to be very helpful.
To understand the problem and the solution, I’ll first briefly cover the EVM, events, and event logs provided by Execution clients.
When a smart contract wants to access data, it can only read what is stored on the blockchain itself at the time of the transaction that triggers the contract's behavior.
For example, the Rocket Pool contract for creating a new minipool can check if you have enough RPL staked to create that new minipool because that information is recorded on-chain when you stake RPL.
What a contract can’t do is access historical states.
It can’t look back into the past - it can only see the present.
Storing things on-chain is generally expensive, and it isn’t done unless necessary for the functionality of the contracts.
Things like your cumulative RPL earned aren’t stored on-chain for this very reason.
Thus, there’s no direct call you can make to the contracts to ask “how much RPL have I earned since I started running the node?”.
Using an Archive Execution client to regenerate past states and query them is prohibitively expensive for such an operation:
Archive ECs are generally not viable for home stakers to run and maintain because of their massive storage requirements and sync times
Accessing old states can take a long time if they’re stored on large, but slow, spindle HDDs instead of fast SSDs
This is where events and logs come in.
Smart contracts have the ability to emit special messages called events.
Events contain well-defined, well-structured data that get stored (along with some metadata) by the Execution client.
They can be accessed at any time off-chain by querying the Execution client’s RPC route, but they are not accessible via the EVM (and thus, not accessible by smart contracts).
Events are used by smart contract developers for logging and debugging, and also to provide a way for light clients and third-party applications to query information about what happened on chain.
For example, the pre-Redstone RocketRewardsPool contract emitted an event called RPLTokensClaimed every time a Node Operator claimed monthly RPL rewards.
When emitted, this event logged:
The address of the node claiming the rewards
How much RPL was claimed
The time of the claim
You can actually see all of these events on Etherscan if you’re curious.
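To make the off-chain query concrete, here is a sketch of the eth_getLogs JSON-RPC payload an application would send to the Execution client to fetch such events. The contract address and topic hash below are placeholders for illustration, not the real RocketRewardsPool values:

```python
import json

def build_get_logs_request(contract_address: str, topic0: str,
                           from_block: int, to_block: int) -> str:
    """Build an eth_getLogs JSON-RPC payload that filters events by
    contract address and event signature hash (topic0)."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getLogs",
        "params": [{
            "address": contract_address,
            # topic0 is the keccak256 hash of the event signature,
            # e.g. RPLTokensClaimed(...); a dummy value is used here
            "topics": [topic0],
            "fromBlock": hex(from_block),
            "toBlock": hex(to_block),
        }],
    }
    return json.dumps(payload)

# Hypothetical address and topic, purely for illustration
req = build_get_logs_request("0x" + "00" * 20, "0x" + "11" * 32,
                             13_325_229, 13_330_000)
```

The Execution client answers with every matching log in that block range, which is exactly the mechanism the Smartnode relies on.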
The Smartnode uses these events for several important tasks.
The Smartnode scans them to calculate your cumulative total earned RPL rewards, and the Oracle DAO uses them to crawl validator deposits to the Beacon Deposit contract when checking for the withdrawal credentials exploit (the "scrub check").
Events are effectively ways to record data about things that happen on the blockchain without needing to store that data on the chain itself, as long as the users of that data are off-chain and can query the Execution client’s RPC endpoint.
This makes them a cheap (emitting an event costs a trivial amount of gas compared to storing all of its data on-chain), reliable, and easily accessible way to extract data from the chain.
That being said, looking up and filtering through events can be computationally expensive.
Execution clients all use a data structure known as a Bloom filter to provide quick access to event logs.
This is actually part of the Ethereum standard itself; each block header has a Bloom field for its logs specifically to make them efficient to filter.
While it’s generally quick, it has its limitations.
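For intuition, a Bloom filter is a fixed-size bit array: adding an item sets a few pseudo-randomly chosen bits, and a lookup reports "possibly present" only if all of that item's bits are set. This toy Python version is a simplification (Ethereum's real logs bloom is 2048 bits wide and keccak-based, neither of which is reproduced here):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: fast probabilistic membership with
    false positives but never false negatives."""

    def __init__(self, size_bits: int = 2048, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into a single int

    def _positions(self, item: bytes):
        # Derive num_hashes independent bit positions for the item
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest, "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True means "possibly present"
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

A block whose bloom says "definitely absent" can be skipped without ever touching its receipts; only "possibly present" blocks need a full scan, which is what makes log filtering fast in the common case.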
These limitations were hit with the new Redstone rewards system.
At each rewards interval, the Oracle DAO generates an artifact known as a Merkle Tree.
Without going into too much detail, this file essentially snapshots and records the amounts of RPL earned from collateral rewards and the ETH earned from the Smoothing Pool by each node operator for that interval.
This data is stored off-chain, so the contracts themselves don’t actually know how much RPL or ETH you earned for a given interval.
You have to tell them how much you earned when you claim your rewards.
Luckily, Merkle Trees work in a clever way that makes it very easy and efficient for contracts to verify that the amount you are trying to claim is correct, even though they don't know in advance how much you can claim.
With that context out of the way, the new claim system needs to know the following things in order to claim rewards for an interval:
The amount of RPL being claimed
The amount of ETH being claimed
The “Merkle proof”, which is a series of hashes that combine with the above to verify the amount being claimed is correct
Feel free to take a look if you’re curious about these artifacts.
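To make the "clever" part concrete, here is a minimal sketch of Merkle proof verification. The real Rocket Pool tree uses keccak256 and a specific leaf encoding of addresses and amounts; this illustration uses sha256 and opaque leaf bytes purely to show the mechanism: the verifier holds only the root, and the claimer supplies the leaf plus a logarithmic number of sibling hashes.

```python
import hashlib

def _hash_pair(a: bytes, b: bytes) -> bytes:
    # Sort the pair so the proof doesn't need left/right flags
    left, right = sorted((a, b))
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    """Compute the root of a Merkle tree over pre-hashed leaves."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [_hash_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to verify leaves[index]."""
    proof, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])  # the other node in this pair
        level = [_hash_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """What the contract does: fold the proof up to a root and compare."""
    node = leaf
    for sibling in proof:
        node = _hash_pair(node, sibling)
    return node == root
```

The contract only ever stores the 32-byte root, yet any of the thousands of per-node amounts in the off-chain file can be proven against it with a handful of hashes.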
Hopefully that background context makes it clear that in order to know how many rewards a user has earned for a given interval, they cannot ask the contracts directly as they could with the previous rewards system; they need to have the JSON file produced by the Oracle DAO for that interval.
Doing this means they need to know where the file is hosted. Since the Oracle DAO hosts these files on IPFS, and files on IPFS are addressable by their CID (their hash), each node needs to know the hash of the file in order to retrieve it.
When the Oracle DAO reaches consensus on a Merkle Tree (which they all generate independently), the last member to vote on that tree triggers it to be canonized as the official tree for that interval.
When it does this, it doesn’t store the CID on-chain.
Instead, to save on gas, it emits an event with the CID for the JSON file on IPFS.
For Smartnode operators, that means the node needs to look for this event when it notices a new rewards interval has begun.
At a high level, v1.6.1 of the Smartnode was designed like this:
Check the index of the current rewards period (0 for the first one, 1 for the second, etc.) which is on-chain
Check which intervals you’ve claimed rewards for already, which is on-chain
If you haven’t claimed for any intervals prior to the current one, make sure you have the rewards files for them
If you don't have them (and you're in "Download" mode), get the event emitted when the Oracle DAO submitted the interval, which contains the CID of the rewards tree file on IPFS (which is off-chain)
The last step is the cause of the high CPU issue.
The Smartnode needs to look through the event logs of the new RocketRewardsPool contract to find the event the Oracle DAO emitted when it canonized the tree for that interval, as that event contains the CID needed to download the correct tree from IPFS.
Unfortunately, the Smartnode doesn’t know when to start looking for the new tree (as the block Redstone was deployed on is not recorded on-chain), so it defaults to a “safe” well-known value: the block that the Rocket Pool protocol itself was deployed to the chain, which is recorded on-chain.
For reference, on Mainnet, this was block 13,325,229.
It has been almost one year since then, and as of this writing, Mainnet is currently on block 15,523,175.
That means scanning for the first rewards interval event needs to go through over 2 million blocks to find it.
As clever and efficient as the Bloom filter is, this sheer amount of work - combined with the event log searching the Smartnode was already doing to calculate and display your cumulative RPL rewards earned on the Grafana dashboard - was too much for most Execution clients.
This information was being queried every 5 minutes (the default update interval for Grafana). Because it took longer than 5 minutes to calculate on most systems, the Execution client would suddenly be tasked with both the first round of the calculation and a new second round of that same calculation, because the first one wasn't done yet.
This caused a cascade of event log queries that brought the Execution client to its knees until the metrics gathering loop was stopped; hence why it was fixed by shutting down rocketpool_node, which is the process that runs the metrics gathering loop.
Unfortunately, this process is responsible for other key things, so this was only a temporary alleviation until Smartnode v1.6.2 was released, which contained a workaround for this problem.
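The cascade comes from launching a new round of an expensive query before the previous round has finished. A common guard (not what v1.6.1 did, which is exactly why the queries stacked up) is to skip a tick when the previous one is still running, for example with a non-blocking lock. This hypothetical sketch is in Python rather than the Smartnode's own code, purely to show the pattern:

```python
import threading

metrics_lock = threading.Lock()

def collect_metrics(work=lambda: None) -> bool:
    """One metrics tick. `work` stands in for the expensive
    event-log scan. Returns False if the tick was skipped."""
    if not metrics_lock.acquire(blocking=False):
        # Previous round still in flight: back off instead of
        # stacking a second expensive query on the client
        return False
    try:
        work()
        return True
    finally:
        metrics_lock.release()
```

With this guard, a calculation that overruns its 5-minute budget simply causes later ticks to be skipped, rather than piling concurrent scans onto the Execution client.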
Smartnode v1.6.2 included the following changes as a short-term mitigation to this issue:
Disabled calculation of legacy RPL rewards during Grafana's metrics loop and the rocketpool node rewards command
Modified the way the Smartnode looks for Redstone rewards events (see below)
The Smartnode will (temporarily) hard code the block numbers where rewards events were emitted once the Oracle DAO has canonized the tree for an interval.
This way, it won’t have to search for these events; it already knows exactly where they are.
For new rewards intervals where the block isn’t hard-coded, it simply targets the block one rewards interval ahead of the last known hard-coded interval and searches a window of 20,000 blocks centered around this point.
If it can’t find the event there (because, for example, someone hasn’t updated the Smartnode in several months so there are several “unknown” rewards intervals), it will jump ahead another rewards interval and try again.
It will keep doing this until it reaches the head of the chain, at which point it will return an error.
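The steps above can be sketched as follows. This is an illustration of the v1.6.2 strategy, not the actual Smartnode code; find_event_in_range is an assumed helper that queries the Execution client's logs for the submission event and returns it, or None:

```python
WINDOW = 20_000  # blocks searched around each guessed submission point

def find_submission_event(find_event_in_range, last_known_block: int,
                          interval_blocks: int, head_block: int):
    """Jump ahead one rewards interval at a time, scanning a window
    of blocks centered on each guess, until the event is found."""
    guess = last_known_block + interval_blocks
    while guess - WINDOW // 2 <= head_block:
        start = guess - WINDOW // 2
        end = min(guess + WINDOW // 2, head_block)
        event = find_event_in_range(start, end)
        if event is not None:
            return event
        # Not here; maybe the user missed an interval, so jump ahead
        guess += interval_blocks
    raise RuntimeError("reached the chain head without finding the event")
```

Each iteration only scans 20,000 blocks instead of millions, which is why this sidesteps the original CPU problem, but the guesses are only accurate while the interval length stays fixed.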
This is a quick-and-dirty, but successful, way of finding the latest event with one important caveat: it only works if the rewards interval stays the same.
As soon as the interval changes, past intervals can no longer be reliably retrieved without hard-coding their blocks, so older Smartnode releases aren't guaranteed to work if the user doesn't already have the latest files downloaded.
Downloading rewards in this case will require a Smartnode update (which has the specific blocks for each previous event hard-coded).
The most reliable thing to do, bar none, is to store a map of interval -> rewards file CID as an array directly on-chain.
Kane and I have already explored this idea, and we believe it should be added into the Atlas update (predicated on DAO vote approval).
Once the data is on-chain, this entire problem with event scanning goes away.
This is a long-term solution, though. Until then, we should investigate more robust fixes that can reliably weather a rewards interval change.
The first option is to simply keep the system as-is until Atlas is released.
While there is no date for Atlas’s Mainnet release (and indeed, it is still very much in development), one could argue that there will only be a handful of rewards intervals between now and then and it simply isn’t worth spending development time providing a more robust fix until then.
It will require users to regularly update their Smartnode in order to capture any hard-coded rewards intervals, but one could also argue that node operators should be doing this anyway.
The main downside to this is that legacy cumulative RPL rewards will remain disabled until Atlas.
The Smartnode has thus far been designed to be as stateless as possible.
It doesn’t record any information to the filesystem about the state of your node, its validators, or its activity; it procures all of this from the Execution client on-demand.
This way it always knows it has the correct data.
This was true before I was hired by the team, when Jake was still in charge of its architecture, and I’ve tried to stick to that paradigm as best as I can.
This might be a rare situation where we can break that rule, and record some data (particularly about cosmetic things that don’t affect actual node operation) off-chain on the node’s local filesystem.
The idea is that it can essentially “cache” a few things by calculating them once and then saving them so that it doesn’t have to look them up via regular on-chain scanning.
Importantly, those events will always be there in case the user needs to reconstruct or verify the cached data.
One candidate design for such a system would look something like this:
Add a node-state file, using YAML or JSON, which will store cached data for the node.
Add a current-cached-block parameter to this file. This will store the latest block for which the node has processed and cached relevant data. Start this at the Rocket Pool deploy block (13,325,229 on Mainnet).
Add a legacy-rpl-rewards parameter to this file. This will store the cumulative RPL rewards earned pre-Redstone, for display purposes.
Add the Redstone deployment block as a hard-coded parameter to the Smartnode.
If current-cached-block is below the Redstone deployment block:
Crawl the event logs for the old pre-Redstone RocketRewardsPool contract as a background process.
Sum all of the RPL claimed events to determine the cumulative pre-Redstone RPL rewards.
Update current-cached-block with the block number of each event, so it can resume if it gets interrupted later.
During the routine 5-minute update loop, check whether current-cached-block is greater than or equal to the Redstone deployment block. If not, ignore the following behavior.
Check for unclaimed intervals. If any exist, and we do not have the rewards files for them:
Crawl the event logs of the new (post-Redstone) RocketRewardsPool contract as a background process.
Look for the next rewards submission event (the first one that has not been downloaded yet).
When found, update current-cached-block to that block number.
Use the CID in the event to download the file. If it fails, let the logic run during the next cycle - it will try again immediately, since current-cached-block already contains the block number for the missing interval.
Continue until all rewards interval files have been downloaded.
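A minimal sketch of such a cache file follows, with key names matching the ones above. All of this is an assumption for discussion, not a committed design, and JSON plus Python are used here purely for brevity:

```python
import json
import os
import tempfile

ROCKET_POOL_DEPLOY_BLOCK = 13_325_229  # Mainnet deploy block

def load_node_state(path: str) -> dict:
    """Load the cached node state, falling back to defaults on first run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {
        "current-cached-block": ROCKET_POOL_DEPLOY_BLOCK,
        # Stored as a string to avoid precision issues with wei-scale ints
        "legacy-rpl-rewards": "0",
    }

def save_node_state(path: str, state: dict) -> None:
    """Write atomically so an interrupted write can't corrupt the cache."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, path)  # atomic rename on POSIX

def record_progress(path: str, block: int, claimed_rpl: int) -> None:
    """Advance the cache after processing events up to `block`."""
    state = load_node_state(path)
    state["current-cached-block"] = max(state["current-cached-block"], block)
    state["legacy-rpl-rewards"] = str(int(state["legacy-rpl-rewards"]) + claimed_rpl)
    save_node_state(path, state)
```

The atomic write matters: since the events are always available for reconstruction, the worst failure mode of a corrupted or deleted cache is a one-time rescan, never wrong data.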
In theory, other things could be added to this state / cache file as well if the community has suggestions for things that would reduce the metrics-querying load on the EC and BN while maintaining data resilience.
The third option is fairly easy in terms of implementation and CPU load.
It is effectively what we have now, but instead of jumping ahead and aiming at specific “windows” to search for event logs of each missing rewards interval, it just traverses the logs starting at the last known hard-coded block number and continuing until the event is found (or the head of the chain is reached).
This would cause some initial CPU load while it performed the initial traversal, but it would end once it found the relevant event and downloaded the missing file.
If the download fails, it would repeat this work (since it is stateless and doesn't store the block at which it previously found the missing interval's event), but this would only be a problem if it constantly failed to download the file, which is indicative of other problems anyway.
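Sketched with the same kind of assumed helper as before (find_event_in_range queries the Execution client's logs for a block range and returns the event or None), option three is a plain forward scan in fixed-size chunks:

```python
CHUNK = 10_000  # blocks per logs query, to keep each call cheap

def scan_forward(find_event_in_range, start_block: int, head_block: int):
    """Walk from the last known block toward the chain head in chunks,
    stopping at the first matching event (or None at the head)."""
    start = start_block
    while start <= head_block:
        end = min(start + CHUNK - 1, head_block)
        event = find_event_in_range(start, end)
        if event is not None:
            return event
        start = end + 1
    return None  # reached the head without finding the event
```

Unlike the windowed jumping, this needs no assumption about the interval length; the cost is that it re-scans everything from the last hard-coded block on every attempt until the download succeeds.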
Note that this wouldn’t provide a way to retrieve legacy RPL rewards.
If you have an idea for how to solve this problem beyond the solutions above, feel free to include it in the comments here and we can all riff on it together.
Hopefully I’ve provided enough context here for you to understand the problem, the short-term fix, and the options for a longer-term fix until we can resolve it directly in the contracts.
Thanks for taking the time to read through this, and I look forward to hearing feedback from everybody!