Data science

Opening up the ballot box (RF6 edition)

· 9 min read
Carl Cervone
Co-Founder

This is the final post in our Opening up the Ballot Box series for 2024. Changes planned for Retro Funding 2025 will likely reshape how we analyze voting behavior.

In RF6, our results (as an organization) reflected a critical issue with Retro Funding in its current form: subjective visibility often outweighs measurable, long-term impact.

We had two project submissions:

  1. Insights & Data Science (work like this series of posts): awarded 88K OP, the highest of any submission in the round.
  2. Onchain Impact Metrics Infra (open data pipelines for the Superchain): awarded 36K OP, despite being a much larger technical and community effort.

We are humbled by the support for our Insights & Data Science work. Retro Funding has made our work at OSO possible, and we are deeply grateful for this affirmation. But we can’t ignore the underlying signal: the work that is most visible, such as reports, frontends, and ad hoc analysis, tends to receive higher funding than work that delivers deeper, longer-term impact.

Opening up the ballot box (RF5 edition)

· 13 min read
Carl Cervone
Co-Founder

Optimism’s Retro Funding Round 5 (RF5) just wrapped up, with 79 projects (out of ~130 applicants) awarded a total of 8M OP for their contributions to Ethereum and the OP Stack. You can find all the official details about the round here.

In some ways, this round felt like a return to earlier Retro Funding days. There were fewer projects than in Rounds 3 and 4. Venerable teams like Protocol Guild, go-ethereum, and Solidity were back in the mix. Voters voted on projects instead of metrics.

However, RF5 also introduced several major twists: voting within categories, guest voters, and an expertise dimension. We’ll explain all these things in a minute.

Like our other posts in the “opening up the ballot box” canon, this post will analyze the shape of the rewards distribution curve and the preferences of individual voters using anonymized data. We’ll also dive deeper into the results by category and compare expert vs non-expert voting patterns.

Finally, we'll tackle the key question this round sought to answer: do experts vote differently than non-experts? In our view, the answer is yes. We have a lot of data on this topic, so you're welcome to draw your own conclusions.
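
For readers who want to poke at the numbers themselves, here is a minimal sketch of the kind of expert vs non-expert comparison we have in mind. The file and column names are placeholders, not the actual RF5 export schema:

```python
import pandas as pd

# Hypothetical anonymized ballot export; the file and column names are
# illustrative, not the actual RF5 schema.
ballots = pd.read_csv("rf5_anonymized_ballots.csv")
# columns: voter_id, is_expert (bool), category, project_id, allocation_op

# Median allocation per project, split by expert status
medians = (
    ballots
    .groupby(["project_id", "is_expert"])["allocation_op"]
    .median()
    .unstack("is_expert")
)
medians.columns = ["non_expert_median", "expert_median"]  # False, True

# Projects where experts and non-experts diverge the most
medians["gap"] = medians["expert_median"] - medians["non_expert_median"]
print(medians.sort_values("gap", ascending=False).head(10))
```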

Introducing new Open Collective transactions datasets

· 3 min read
Javier Ríos
Engineer

Open Collective is a platform that enables groups to collect and disburse funds transparently. It is used by many open-source projects, communities, and other groups to fund their activities. Notable projects include Open Web Docs (maintainers of MDN Web Docs), Babel, and Webpack.

At Open Source Observer, we have been working on collecting and processing Open Collective data to make it available for analysis. This includes all transactions made on the platform, such as donations, expenses, and transfers. Datasets are updated weekly.
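
As a rough sketch of what working with these datasets can look like, the query below pulls recent transactions from BigQuery. The dataset and table names are placeholders; check our documentation for the exact identifiers:

```python
from google.cloud import bigquery

# Minimal sketch of querying the Open Collective transactions data.
# The dataset and table names below are placeholders, not the exact
# identifiers; see the OSO documentation for the real ones.
client = bigquery.Client()

query = """
    SELECT *
    FROM `opensource-observer.oso.open_collective_transactions`  -- placeholder name
    ORDER BY created_at DESC
    LIMIT 100
"""
df = client.query(query).to_dataframe()
print(df.head())
```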

Opening up the ballot box again (RF4 edition)

· 8 min read
Carl Cervone
Co-Founder

The voting results for Optimism's Retro Funding Round 4 (RF4) were tallied last week and shared with the community.

This is the last in a series of posts on RF4, analyzing the ballot data from different angles. First, we cover high-level trends among voters. Then, we compare voters’ expressed preferences (from a pre-round survey) against their revealed preferences (from the voting data). Finally, we perform some clustering analysis on the votes and identify three distinct “blocs” of voters.
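
As an illustration of the clustering step (not the exact notebook code), a sketch along these lines groups voters by the metric weights on their ballots:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Illustrative sketch only. Assume a wide matrix of anonymized ballots:
# one row per voter, one column per impact metric, values are the weights
# each voter assigned.
ballots = pd.read_csv("rf4_anonymized_ballots_wide.csv", index_col="voter_id")

# Normalize each ballot so every voter's weights sum to 1
normalized = ballots.div(ballots.sum(axis=1), axis=0).fillna(0)

# Look for three voter blocs (k chosen here for illustration)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(normalized)

print(pd.Series(labels).value_counts())  # number of voters per bloc
```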

Retro Funding aims for iteration and improvement. We hope these insights can inform both the evolution of impact metrics and governance discussions around impact, badgeholder composition, and round design.

You can find links to our work here.

What’s been the impact of Retro Funding so far?

· 14 min read
Carl Cervone
Co-Founder

This post is a brief exploration of the before-after impact of Optimism’s Retro Funding (RF) on open source software (OSS) projects. For context, see some of our previous work on the Optimism ecosystem and especially this one from the start of RF3 in November 2023.

We explore:

  1. Cohort analysis. Most RF3 projects were also in RF2. However, most projects in RF4 are new to the game.
  2. Trends in developer activity before/after RF3. Builder numbers are up across the board since RF3, even when compared to a baseline cohort of other projects in the crypto ecosystem that have never received RF.
  3. Onchain activity before/after RF3. Activity is increasing for most onchain projects, especially returning ones. However, RF impact is hard to isolate because L2 activity is rising everywhere.
  4. Open source incentives. Over 50 projects made their GitHub repositories public to apply for RF4. Will building in public become the norm, or were they just trying to get into the round?

As always, we've included source code for all our analysis (and even CSV dumps of the underlying data), so you can check our work and draw your own conclusions.
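
To give a flavor of the before/after comparison in point 2 above, here is a minimal sketch assuming a hypothetical monthly developer-activity table; the column names are illustrative rather than the actual OSO schema:

```python
import pandas as pd

# Illustrative before/after comparison against a baseline cohort.
# Column names are made up for this sketch, not the actual OSO schema.
activity = pd.read_csv("monthly_active_devs.csv", parse_dates=["month"])
# columns: project_id, cohort ("rf3" or "baseline"), month, active_devs

RF3_START = pd.Timestamp("2023-11-01")
activity["period"] = activity["month"].apply(
    lambda m: "after" if m >= RF3_START else "before"
)

# Median monthly active developers per cohort, before vs after RF3
summary = (
    activity
    .groupby(["cohort", "period"])["active_devs"]
    .median()
    .unstack("period")
)
summary["pct_change"] = (summary["after"] / summary["before"] - 1) * 100
print(summary)
```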

A deeper dive on the impact metrics for Optimism Retro Funding 4

· 11 min read
Carl Cervone
Co-Founder

Voting for Optimism’s fourth round of Retroactive Public Goods Funding (“Retro Funding”) opened on June 27 and will run until July 11, 2024. You can check out the voting interface here.

As discussed in our companion post, Impact Metrics for Optimism Retro Funding 4, the round is a significant departure from the previous three rounds. This round, voters will be comparing just 16 metrics – and using their ballots to construct a weighting function that can be applied consistently to the roughly 200 projects in the round.

This post is a deeper dive on the work we did at Open Source Observer to help organize data about projects and prepare Optimism badgeholders for voting.

Reflections on Filecoin's first round of RetroPGF

· 10 min read
Carl Cervone
Co-Founder

Filecoin’s first RetroPGF round ("FIL RetroPGF 1") concluded last week, awarding nearly 200,000 FIL to 99 (out of 106 eligible) projects.

For a full discussion of the results, I strongly recommend reading Kiran Karra’s article for CryptoEconLab. It includes some excellent data visualizations as well as links to raw data and anonymized voting results.

This post will explore the results from a different angle, looking specifically at three aspects:

  1. How the round compared to Optimism’s most recent round (RetroPGF3)
  2. How impact was presented to badgeholders
  3. How open source software impact was rewarded by badgeholders

It will conclude with some brief thoughts on how metrics can help with evaluation in future RetroPGF rounds.

As always, you can view the analysis notebooks here and run your own analysis using Open Source Observer data by going here. If you want additional context for how the round was run, check out the complete Notion guide here.

Onchain impact metrics for Optimism Retro Funding 4

· 16 min read
Carl Cervone
Co-Founder

Open Source Observer is working with the Optimism Collective and its badgeholder community to develop a suite of impact metrics for assessing projects applying for Retro Funding 4.

Introduction

Retro Funding 4 is the Optimism Collective’s first experiment with Metrics-based Evaluation. The hypothesis is that by leveraging quantitative metrics, citizens are able to more accurately express their preferences for the types of impact they want to reward, as well as make more accurate judgements of the impact delivered by individual projects.

In contrast to other Retro Funding experiments, badgeholders will not vote on individual projects but will instead vote by selecting and weighting a number of metrics that measure different types of impact.
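
As a toy illustration of the mechanism (with made-up metric names and numbers, and ignoring the round's actual normalization and ballot-aggregation rules), a ballot of metric weights can be turned into a per-project allocation like this:

```python
import pandas as pd

# Toy example: three projects scored on three metrics.
# All names and numbers below are made up for illustration.
projects = pd.DataFrame({
    "gas_fees": [120.0, 30.0, 60.0],
    "trusted_users": [5000, 800, 2200],
    "repeat_users": [1500, 400, 900],
}, index=["project_a", "project_b", "project_c"])

# A ballot expresses weights over metrics (summing to 1.0)
ballot_weights = {"gas_fees": 0.5, "trusted_users": 0.3, "repeat_users": 0.2}

# Convert each metric to a project's share of the column total
shares = projects / projects.sum()

# Weighted score per project, scaled to a hypothetical round budget
scores = sum(w * shares[m] for m, w in ballot_weights.items())
allocation = scores / scores.sum() * 10_000_000  # e.g., a 10M OP round
print(allocation.round(0))
```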

The Optimism Foundation has published high-level guidance on the types of impact that will be rewarded:

  • Demand generated for Optimism blockspace
  • Interactions from repeat Optimism users
  • Interactions from Optimism users with high trust scores / onchain reputations
  • Interactions of new Optimism users
  • Open source license of contract code

The round is expected to receive applications from hundreds of projects building on six Superchain networks (OP Mainnet, Base, Frax, Metal, Mode, and Zora). Details for the round can be found here.
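
To make one of the impact types above concrete, here is a rough sketch of a "repeat user" style metric computed from a hypothetical transactions table; this is not the exact definition used in the round:

```python
import pandas as pd

# Illustrative sketch of an "interactions from repeat users" metric,
# using a hypothetical transactions table (not the exact OSO definition).
txns = pd.read_csv("superchain_transactions.csv", parse_dates=["block_date"])
# columns: project_id, from_address, block_date

# Count distinct active months per address per project
active_months = (
    txns.assign(month=txns["block_date"].dt.to_period("M"))
    .groupby(["project_id", "from_address"])["month"]
    .nunique()
)

# Treat an address active in at least 3 distinct months as a "repeat user"
repeat_users = (
    active_months[active_months >= 3]
    .groupby("project_id")
    .size()
    .rename("repeat_users")
)
print(repeat_users.sort_values(ascending=False).head())
```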

At Open Source Observer, our objective is to help the Optimism community arrive at up to 20 credible impact metrics that can be applied to projects with contracts on the Superchain.

This page explains where the metrics come from and includes a working list of all metrics under consideration for badgeholders. We will update it regularly, at least until the start of voting (June 23), to reflect the evolution of the metrics. The first version of the metrics was released on 2024-05-16 and the most recent version (below) was released on 2024-06-24.

Trends and progress among OSS projects in Octant's latest epoch

· 10 min read
Carl Cervone
Co-Founder

Octant recently kicked off Epoch 3, its latest reward allocation round, featuring 30 projects. This round comes three months after Epoch 2, which included 24 projects. Twenty projects are continuing from Epoch 2 into Epoch 3, including Open Source Observer.

During Epoch 2, we published a blog post with some high-level indicators about the 20+ open source software (OSS) projects participating in the round. In this post, we'll provide some insights about the new OSS projects and refresh our analysis for the returning projects.

Overall, in Epoch 3, Octant is helping support:

  • 26 (out of 30) projects with at least some recent OSS component to their work
  • 343 GitHub repos with regular activity
  • 651 developers making regular code commits or reviews

In the last 6 months, these 26 projects:

  • Attracted 881 first-time contributors
  • Closed over 4,646 issues (and created 4,856 new ones)
  • Merged over 9,745 pull requests (and opened 11,534 new ones)

Impact pools on Arbitrum: identifying projects that are driving ecosystem growth

· 10 min read
Carl Cervone
Co-Founder

In our last post, we provided a snapshot on the open source software projects building on Arbitrum. In this post, we will apply a series of experimental impact metrics to identify positive growth and network contribution trends across a cohort of more than 300 major projects on Arbitrum.

We believe impact metrics such as these are instrumental in helping the Arbitrum DAO better design incentives and allocate capital across its ecosystem. The metrics we've included are all derived from both onchain and offchain project data. They include well-established crypto indicators like active users, sequencer fees, and transaction counts, as well as common OSS metrics like full-time active developers, issues closed, and new contributors.

The real value, however, lies in combining simple metrics in novel ways to filter and benchmark projects' contributions. We introduce four "impact pools" that can assist with this type of analysis. The pools are:

  • Sustainable user growth: projects that not only bring large numbers of active users to the network but also retain and connect them easily to other dapps
  • Developer growth: projects with the most developer activity and new contributors to their GitHub repos in recent months
  • Blockspace demand: projects with the most transactions and sequencer fee contributions
  • Momentum: projects with a mix of positive developer and onchain user trends
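
As a simplified illustration of how such a pool can be composed (with made-up numbers rather than real Arbitrum data), simple metrics can be ranked and blended like this:

```python
import pandas as pd

# Illustrative sketch of composing an "impact pool" from simpler metrics.
# The metric values below are made up; real pools use OSO's onchain and
# GitHub data for the full Arbitrum cohort.
metrics = pd.DataFrame({
    "active_users": [12000, 900, 4500],
    "sequencer_fees_eth": [35.0, 2.1, 11.4],
    "new_contributors": [18, 4, 9],
    "fulltime_devs": [6, 1, 3],
}, index=["dapp_a", "dapp_b", "dapp_c"])

# Rank each metric from 0 to 1 so different units are comparable
ranked = metrics.rank(pct=True)

# A simple "momentum"-style pool: blend onchain and developer signals,
# then keep projects above a threshold
pool_score = (
    0.5 * ranked[["active_users", "sequencer_fees_eth"]].mean(axis=1)
    + 0.5 * ranked[["new_contributors", "fulltime_devs"]].mean(axis=1)
)
momentum_pool = pool_score[pool_score >= 0.5].sort_values(ascending=False)
print(momentum_pool)
```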