# 1 Introduction

As part of the data visualization subdivision of Ethereum Foundation community staking grantees, our research seeks to provide graphical insights into the function and health of the network. Over the last few months, we have worked to update our analysis on the performance of validator nodes which was originally performed on the Medalla testnet. In this article, we will provide an update on the progress, highlighting the work we’ve done both in terms of the data infrastructure and analysis. Specifically, the four major updates we’ve performed are as follows:

1. Developed a robust data backend based on chaind for collecting and storing validator data.
2. Analyzed the performance of validators across key metrics and visualizations, comparing current performance to our previous findings from the Medalla testnet.
3. Derived a new tier and scoring process to rank the validators’ behaviors.
4. Deployed a real-time health dashboard that aggregates attestation and behavioral statistics from across the network.

We will cover each of these updates in this post.

# 2 Data Backend

This section covers the steps we took to configure a database backend for the validator analysis.

## 2.1 Overview

The first major update was to create a robust and scalable data backend. For the Medalla testnet, we collected the data ad-hoc in a data scraping procedure that extracted information from Beaconscan. For this update, we’ve built upon chaind to pull data directly from the Ethereum blockchain into a structured PostgreSQL database.

## 2.2 Technical Steps

For the technically curious, these are the exact steps we took to configure the above setup on our Ubuntu 20.04 analytics server.

1. Install a Beacon Node
1. Supported Beacon Nodes are Teku, Prysm, and Lighthouse. For performance reasons, Teku is recommended.
2. Full Teku installation instructions available at: https://docs.teku.consensys.net/en/latest/HowTo/Get-Started/Installation-Options/Install-Binaries/
3. Latest binary release available at: https://github.com/ConsenSys/teku/releases
4. Unzip the archive
2. Run a Beacon Node
1. cd into the Teku folder (at the time of writing, the latest release is 21.3.2, so the command would be cd teku-21.3.2)
2. cd into the bin folder
3. Execute Teku in data storage archive mode, with the REST API enabled: teku --rest-api-enabled --data-storage-mode=archive
4. Teku will begin to sync the blocks and associated information from the Ethereum 2 Mainnet to the local machine. This process could take several hours. The current slot indicates the most recent slot on Ethereum 2, while the head slot indicates the most recent slot synced locally. For example, a sync that is about 5000 slots behind will display logging along the lines of: 06:47:11.394 INFO - Sync Event *** Current slot: 922134, Head slot: 917151, Connected peers: 22
5. Once fully synced, the logs should show something along the lines of: 06:53:51.163 INFO - Slot Event *** Slot: 922167, Block: fd9e45..5701, Epoch: 28817, Finalized checkpoint: 28815, Finalized root: 522dc6..a6b7, Peers: 42. This indicates that all blocks have been synced, and we can proceed by installing and running chaind to populate our PostgreSQL database.
3. Install PostgreSQL and configure an empty database
1. The following instructions assume the use of Ubuntu 20.04. Full documentation on this and other platforms is available at: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-20-04
2. Install the Postgres binary: sudo apt install postgresql postgresql-contrib
3. Create a role for chaind: sudo -u postgres createuser --interactive. Follow the prompts, naming the new user chaind
4. Create a database for chaind: sudo -u postgres createdb chaind
5. Login to the new database: sudo -u chaind psql
4. Install chaind
1. The following instructions use the Go Source installation method. Full installation instructions are available at: https://github.com/wealdtech/chaind#source
2. Install Go: sudo apt install golang-go
5. Run chaind
1. Change directory to the location of the chaind binary: cd ~/go/bin
2. Execute the binary, providing both the Teku REST API URL and the Postgres connection URL: chaind --chaindb.url=postgres://chaind::5432 --eth2client.address=localhost:5051
3. The database synchronization will now take place. This could take several more hours.
6. Check the database sync progress
1. Access the database: sudo -u chaind psql
2. Query the most recent block; the latest slot and block can then be compared with the Ethereum Mainnet to see the progress of the database sync

In Figure 2.1 our Teku Beacon node is catching up to the head slot. When it is complete, the data is available on the server for chaind to begin synchronization to the database.

The tables in this database now contain the data needed to recreate our validator analysis. With some minor manipulation and joins of the raw data, we obtain a dataset that matches the original structure of the data collected from the Medalla testnet, which can be seen below:

| publickey | index | currentBalance | effectiveBalance | assigned | executed | skipped | eligibilityEpoch | activationEpoch | exitEpoch | withEpoch | slashed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0x8e968b….77adc40b | 55738 | 288.1698 | 32 | 3 | 3 | 0 | 6265 | 9144 | NA | NA | FALSE |
| 0xb228bd….f9be419b | 53633 | 191.7624 | 32 | 2 | 0 | 2 | 5051 | 8618 | NA | NA | FALSE |
| 0xaf7cc1….18ee94ff | 44766 | 191.6700 | 32 | 4 | 0 | 4 | 4155 | 6401 | NA | NA | FALSE |
| 0x91845a….048a358b | 25550 | 190.1818 | 32 | 10 | 3 | 7 | 311 | 1404 | NA | NA | FALSE |
| 0x81ccb4….e5130868 | 34231 | 160.3910 | 32 | 9 | 7 | 2 | 3312 | 3768 | NA | NA | FALSE |
| 0xb8cd03….90adeb16 | 52757 | 159.7610 | 32 | 5 | 0 | 5 | 4984 | 8399 | NA | NA | FALSE |
| 0x8a6120….1bb56a62 | 23018 | 158.8510 | 32 | 16 | 12 | 4 | 119 | 771 | NA | NA | FALSE |
| 0xadeac9….8470b19c | 14618 | 158.1426 | 32 | 17 | 3 | 14 | 0 | 0 | NA | NA | FALSE |
| 0x89cec3….82f251aa | 21754 | 158.0481 | 32 | 14 | 0 | 14 | 36 | 455 | NA | NA | FALSE |
| 0x97bdad….f7972c3e | 42259 | 128.3422 | 32 | 6 | 6 | 0 | 3993 | 5775 | NA | NA | FALSE |
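The aggregation itself is straightforward. The sketch below shows the kind of reduction involved, using plain Python on illustrative records; the actual table and column names in the chaind schema are assumptions here, not the real schema.

```python
# Hypothetical sketch of aggregating per-validator duty records into the
# assigned/executed/skipped columns of the summary table above.
# The record shape (validator_index, executed_flag) is illustrative only.
from collections import defaultdict

def summarize(duties):
    """duties: iterable of (validator_index, executed_bool) attestation records."""
    stats = defaultdict(lambda: {"assigned": 0, "executed": 0, "skipped": 0})
    for index, executed in duties:
        s = stats[index]
        s["assigned"] += 1
        if executed:
            s["executed"] += 1
        else:
            s["skipped"] += 1
    return dict(stats)

# Toy data: validator 55738 was assigned 3 duties and skipped 1 of them
duties = [(55738, True), (55738, True), (55738, False), (53633, True)]
print(summarize(duties)[55738])  # {'assigned': 3, 'executed': 2, 'skipped': 1}
```

In practice the equivalent joins and group-bys are done against the PostgreSQL tables directly, but the shape of the reduction is the same.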

# 3 Validator Performance

In this section, we will survey the results from our new analysis of validator performance, comparing and contrasting the old (Medalla) results to the new results. At a high level, our findings indicate that validator performance has generally improved across the board.

The Medalla testnet data spanned 15,450 epochs and included a total of 80,392 validators overall, beginning with the genesis block on August 4th, 2020. By contrast, as of this writing (April 21st, 2021) the Ethereum Mainnet data includes 31,592 epochs with slots assigned to 121,335 validators, starting from the genesis block on December 1st, 2020.

## 3.1 Activated Validators over Time

The Ethereum network is configured to activate four validators per epoch. Interestingly, some epochs within the Medalla test phase activated more than 4 validators, but this has not happened on the current Beacon chain. The graphs do show, however, that some epochs on the current mainnet have on-boarded fewer than 4 validators.

During the last 3,000 or so epochs, a number of epochs have activated fewer than the standard 4 validators. This can be seen in the deviation of the line from its otherwise linear pattern.
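Flagging those under-filled epochs is a simple count over the activation data. This is a minimal sketch, assuming a flat list of activation epochs like the activationEpoch column shown earlier; note it only surfaces epochs that appear in the data at all.

```python
# Sketch: find epochs that activated fewer validators than expected.
# Input is a list of activationEpoch values, one per validator.
from collections import Counter

def underfilled_epochs(activation_epochs, expected=4):
    """Return {epoch: count} for epochs with fewer than `expected` activations."""
    counts = Counter(activation_epochs)
    return {epoch: n for epoch, n in counts.items() if n < expected}

# Toy data: epoch 101 only activated 2 validators
acts = [100] * 4 + [101] * 2 + [102] * 4
print(underfilled_epochs(acts))  # {101: 2}
```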

## 3.2 Distribution of the Number of Block Assignments per Validator

Constant validator inflows and outflows on the Medalla testnet left a significant number of nodes without any assignments. One of the first hints that mainnet validators have been performing well is the larger average number of assignments. The peak of the mainnet distribution on the right shows that many validators have been assigned at least five attestations. As we continue to track this distribution, it is likely to begin to skew leftward as the early cohorts successfully validate blocks.

## 3.3 Assigned, Executed and Skipped

When we look more deeply at the breakdown of assignments, executions, and skips across the blocks, we see a similar pattern: the validators' steady performance and time on the network have increased the average number of successful proposals per validator. We can also see that the number of skipped slots has decreased dramatically when comparing the shapes of the distributions and the averages between the two networks.

## 3.4 Execution Rate

Given what we’ve seen above it is no surprise that the execution rate, as measured by the number of executed blocks over assigned blocks, is much closer to 100% for all active validators on the Ethereum 2.0 mainnet.

## 3.5 Skipped Rate

Likewise, the skip rate, measured as the number of blocks skipped divided by the number assigned, has plummeted, which suggests that validators are completing their attestation duties fully and correctly.

## 3.6 Distribution of Time to Exit

The time-to-exit distribution is still right-skewed, with most exiting validators leaving quite early. However, since the launch of mainnet, most misconfigured nodes actually leave within the first 200 hours. A secondary cluster peak was found at 1500 hours, which motivated the following analysis of the time series of validator exits.
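Converting between epochs and hours is useful for reading these distributions: an epoch is 32 slots of 12 seconds each, so 384 seconds. The sketch below does the conversion, and shows that the 1500-hour cluster lines up with the epoch range discussed in the next section.

```python
SECONDS_PER_EPOCH = 32 * 12  # 32 slots of 12 seconds each = 384 s

def epochs_to_hours(n_epochs):
    """Convert a span measured in epochs to hours."""
    return n_epochs * SECONDS_PER_EPOCH / 3600

# The secondary cluster near 1500 hours corresponds to roughly:
print(round(3600 * 1500 / SECONDS_PER_EPOCH))  # 14062 epochs
```

In other words, a node that exited around hour 1500 did so near epoch 14,000, which matches the exit spike described below.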

## 3.7 Exiting Validators by Epoch

When looking at the exiting validators by epoch compared to the Medalla testnet, we can immediately see that validators have taken their job more seriously. Only 144 validators have exited in the over 31,000 epochs tracked since the Beacon chain's inception. Curiously, there was a noticeable spike between epochs 14,000 and 15,000.

## 3.8 Exiting Validators by Epoch - Cumulative

Like the Medalla chain, the Beacon chain experiences spikes in exits over relatively short periods of time. Here, between epochs 14,000 and 15,000, we went from slightly over 40 exited validators to a little more than 140. On a positive note, there are much longer periods of time with either no exits or very few per epoch.
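The cumulative view is just a running sum of the per-epoch exit counts. A minimal sketch with toy data:

```python
# Sketch: cumulative exits from per-epoch exit counts (toy data).
from itertools import accumulate

exits_per_epoch = [0, 2, 0, 5, 1]
cumulative_exits = list(accumulate(exits_per_epoch))
print(cumulative_exits)  # [0, 2, 2, 7, 8]
```

Spikes in exits show up as jumps in this running total, while the quiet stretches appear as long plateaus.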

## 3.9 Slashing Over Time

Much like the exits over time, slashings appear to occur in bulk; again, the majority of slashings occurred between February 1st and 6th (epochs 14,000 to 15,000). As it turns out, this was due to a double-signing mishap by a single staking provider.

All in all, the theme of validators generally performing better than was seen on the Medalla testnet is prevalent throughout this analysis. We now turn our attention to deriving a new set of tiers based on the new data.

# 4 Tier and Score Derivation

Using the Mainnet data, we performed the same feature derivation and clustering routine in order to attempt to derive a new tier distribution for the validators. However, when we applied the previous procedure directly to the mainnet data, our scoring system proved inadequate. Here is the distribution of scores obtained when applying the old procedure. You can immediately see that there is a lack of distinctiveness between tiers of validators, with the perfect validators linearly extrapolated along a scale, and imperfect validators below that. In other words, there is not enough differentiation obtained in order to neatly map this score distribution to validator tiers.

The reasons for the breakdown are several. For the Medalla test data, the validator performance more naturally lent itself to a tier-based structure: the score thresholds were more distinct, and when analyzing the behavior within tiers, the behavior was more consistent. With the Mainnet data, largely due to the increase in validator performance across the board, our old thresholds failed to perform well; the vast majority of validators would have achieved a Tier 1 ranking. Furthermore, the distribution of scores was not normal, with the scores skewed in several different regions along the curve.

We then took an alternative approach. We decided to use the previous set of clusters to inform a pre-defined set of groupings that would provide coverage over the set of validators and their behavior. In this way, we ensure that the clusters themselves represent distinct and interesting behaviors with an implication as to their overall performance. Ultimately, we settled on the following 7 tiers:

• Tier 1: Perfect, with at least 2 assignments
• Tier 2: Validators with at least a 90% success rate
• Tier 3: Perfect Inexperienced Nodes (1 success & 0 skipped)
• Tier 4: Completely Inexperienced (0 successes & 0 skipped)
• Tier 5: >= .5 success rate
• Tier 6: < .5 success rate
• Tier 7: Slashed and Left

These thresholds allow validators to be placed into 7 tiers where we sort according to the number of assignments and time on the network. Within each tier, scores are derived using the numeric variables provided, preferring validators that have a large number of assignments, have been on the network the longest, and have the fewest skipped slots. Our pre-defined thresholds spread out the distribution of scores as we intended! When we compare the old scoring procedure to the new, the improvement in the distribution is immediately obvious.
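The tier rules above can be sketched as a simple decision function. This is our illustrative reading of the thresholds; the exact boundary handling (e.g. how a 90% success rate is rounded) is an assumption, not the production scoring code.

```python
def assign_tier(assigned, executed, skipped, slashed):
    """Sketch of the 7-tier rules; boundary handling here is illustrative."""
    if slashed:
        return 7                          # Tier 7: slashed and left
    if assigned == 0:
        return 4                          # Tier 4: completely inexperienced
    rate = executed / assigned
    if rate == 1.0:
        # Tier 1 needs at least 2 assignments; a single perfect
        # attestation is Tier 3 (perfect but inexperienced).
        return 1 if assigned >= 2 else 3
    if rate >= 0.9:
        return 2                          # Tier 2: >= 90% success rate
    return 5 if rate >= 0.5 else 6        # Tiers 5 and 6 split at 50%

print(assign_tier(3, 3, 0, False))   # 1
print(assign_tier(1, 1, 0, False))   # 3
print(assign_tier(10, 9, 1, False))  # 2
```

Within each tier, the actual scores then sort validators by assignments, time on the network, and skipped slots as described above.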