
Opening up the ballot box again (RF4 edition)

· 8 min read
Carl Cervone
Co-Founder

The voting results for Optimism's Retro Funding Round 4 (RF4) were tallied last week and shared with the community.

This is the last in a series of posts on RF4, analyzing the ballot data from different angles. First, we cover high-level trends among voters. Then, we compare voters’ expressed preferences (from a pre-round survey) against their revealed preferences (from the voting data). Finally, we perform some clustering analysis on the votes and identify three distinct “blocs” of voters.

Retro Funding aims for iteration and improvement. We hope these insights can inform both the evolution of impact metrics and governance discussions around impact, badgeholder composition, and round design.

You can find links to our work here.

OSO Data Portal: free live datasets open to the public

· 3 min read
Raymond Cheng
Co-Founder

At Open Source Observer, we have been committed to building everything in the open from the very beginning. Today, we take that openness to the next level by launching the OSO Data Exchange on Google BigQuery. Here, we will publish every dataset we have as a live, up-to-date, and free-to-use dataset. In addition to sharing every model in the OSO production data pipeline, we are sharing source data for blocks/transactions/traces across 7 chains in the OP Superchain (Optimism, Base, Frax, Metal, Mode, PGN, Zora), as well as Gitcoin and OpenRank data. This builds on the existing BigQuery public data ecosystem, which includes GitHub, Ethereum, Farcaster, and Lens data. To learn more, check out the data portal here:

opensource.observer/data
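If you want to take the datasets for a spin, here is a minimal sketch of a query using the BigQuery Python client. The project, dataset, and table names below are placeholders for illustration; see the data portal for the actual names.

```python
# Minimal sketch: querying an OSO public dataset on BigQuery from Python.
# Requires `pip install google-cloud-bigquery` and authenticated GCP credentials.
# NOTE: the project/dataset/table names below are illustrative placeholders;
# see opensource.observer/data for the actual dataset names.

from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project for billing

query = """
    SELECT *
    FROM `opensource-observer.example_dataset.example_table`  -- placeholder table
    LIMIT 10
"""

for row in client.query(query).result():
    print(dict(row))
```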


What’s been the impact of Retro Funding so far?

· 14 min read
Carl Cervone
Co-Founder

This post is a brief exploration of the before-after impact of Optimism’s Retro Funding (RF) on open source software (OSS) projects. For context, see some of our previous work on the Optimism ecosystem and especially this one from the start of RF3 in November 2023.

We explore:

  1. Cohort analysis. Most RF3 projects were also in RF2. However, most projects in RF4 are new to the game.
  2. Trends in developer activity before/after RF3. Builder numbers are up across the board since RF3, even when compared to a baseline cohort of other projects in the crypto ecosystem that have never received RF.
  3. Onchain activity before/after RF3. Activity is increasing for most onchain projects, especially returning ones. However, RF impact is hard to isolate because L2 activity is rising everywhere.
  4. Open source incentives. Over 50 projects made their GitHub repositories public in order to apply for RF4. Will building in public become the norm, or were they just trying to get into the round?

As always, we've included source code for all our analysis (and even CSV dumps of the underlying data), so you can check our work and draw your own conclusions.
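For readers who want to reproduce the before/after comparisons, here is a rough sketch of the kind of analysis involved, assuming a hypothetical CSV with project, cohort, month, and active developer columns (not the exact schema of our exports).

```python
# Sketch of a before/after comparison of developer activity around RF3,
# assuming a CSV with hypothetical columns: project, cohort ("RF3" or
# "baseline"), month, and active_developers.
import pandas as pd

RF3_DATE = "2023-11-01"  # approximate start of RF3, used as the cutoff

df = pd.read_csv("developer_activity.csv", parse_dates=["month"])
df["period"] = df["month"].apply(
    lambda m: "before" if m < pd.Timestamp(RF3_DATE) else "after"
)

# Average monthly active developers per cohort, before vs after the cutoff
summary = (
    df.groupby(["cohort", "period"])["active_developers"]
      .mean()
      .unstack("period")
)
summary["pct_change"] = (summary["after"] - summary["before"]) / summary["before"] * 100
print(summary)
```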

A deeper dive on the impact metrics for Optimism Retro Funding 4

· 11 min read
Carl Cervone
Co-Founder

Voting for Optimism’s fourth round of Retroactive Public Goods Funding (“Retro Funding”) opened on June 27 and will run until July 11, 2024. You can check out the voting interface here.

As discussed in our companion post, Impact Metrics for Optimism Retro Funding 4, the round is a significant departure from the previous three rounds. This round, voters will be comparing just 16 metrics – and using their ballots to construct a weighting function that can be applied consistently to the roughly 200 projects in the round.
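As a rough illustration of what a "weighting function applied consistently across projects" can look like, here is a simplified sketch; the metric names, normalization, and scoring steps are assumptions for intuition, not the exact RF4 algorithm.

```python
# Illustrative sketch (not the actual RF4 algorithm): a ballot assigns weights
# to metrics, and each project's share is a weighted sum of its normalized
# metric values.
import pandas as pd

# Hypothetical per-project metric values
metrics = pd.DataFrame(
    {"gas_fees": [120.0, 30.0, 50.0], "trusted_users": [900, 4000, 1500]},
    index=["project_a", "project_b", "project_c"],
)

# A ballot: one weight per metric, summing to 1
ballot = {"gas_fees": 0.6, "trusted_users": 0.4}

# Normalize each metric so projects are compared on the same scale
normalized = metrics / metrics.sum()

# Score = weighted sum of normalized metrics; shares sum to 1 across projects
scores = sum(weight * normalized[name] for name, weight in ballot.items())
print(scores.sort_values(ascending=False))
```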

This post is a deeper dive on the work we did at Open Source Observer to help organize data about projects and prepare Optimism badgeholders for voting.

Reflections on Filecoin's first round of RetroPGF

· 10 min read
Carl Cervone
Co-Founder

Filecoin’s first RetroPGF round ("FIL RetroPGF 1") concluded last week, awarding nearly 200,000 FIL to 99 (out of 106 eligible) projects.

For a full discussion of the results, I strongly recommend reading Kiran Karra’s article for CryptoEconLab. It includes some excellent data visualizations as well as links to raw data and anonymized voting results.

This post will explore the results from a different angle, looking specifically at three aspects:

  1. How the round compared to Optimism’s most recent round (RetroPGF3)
  2. How impact was presented to badgeholders
  3. How open source software impact was rewarded by badgeholders

It will conclude with some brief thoughts on how metrics can help with evaluation in future RetroPGF rounds.

As always, you can view the analysis notebooks here and run your own analysis using Open Source Observer data here. If you want additional context for how the round was run, check out the complete Notion guide here.

Onchain impact metrics for Optimism Retro Funding 4

· 16 min read
Carl Cervone
Co-Founder

Open Source Observer is working with the Optimism Collective and its badgeholder community to develop a suite of impact metrics for assessing projects applying for Retro Funding 4.

Introduction

Retro Funding 4 is the Optimism Collective’s first experiment with Metrics-based Evaluation. The hypothesis is that by leveraging quantitative metrics, citizens are able to more accurately express their preferences for the types of impact they want to reward, as well as make more accurate judgements of the impact delivered by individual projects.

In contrast to other Retro Funding experiments, badgeholders will not vote on individual projects but will instead vote by selecting and weighting a number of metrics that measure different types of impact.
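To make the contrast concrete, here is a small sketch of the two ballot shapes and one simple (assumed) way metric ballots could be combined; the metric names and the averaging step are illustrative, not the Collective's actual aggregation rule.

```python
# Illustrative contrast between the two voting models (names are placeholders).

# Previous rounds: a ballot allocates tokens directly to individual projects.
project_ballot = {"project_a": 15_000, "project_b": 5_000, "project_c": 30_000}

# RF4: a ballot instead selects and weights impact metrics.
metric_ballot = {"gas_fees": 0.5, "trusted_users": 0.3, "new_users": 0.2}

# One simple (assumed) way to combine many metric ballots: average the weight
# each voter gives to each metric, then apply the result to every project.
ballots = [
    {"gas_fees": 0.5, "trusted_users": 0.3, "new_users": 0.2},
    {"gas_fees": 0.2, "trusted_users": 0.6, "new_users": 0.2},
]
all_metrics = sorted({m for b in ballots for m in b})
avg_weights = {
    m: sum(b.get(m, 0.0) for b in ballots) / len(ballots) for m in all_metrics
}
print(avg_weights)
```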

The Optimism Foundation has published high-level guidance on the types of impact that will be rewarded:

  • Demand generated for Optimism blockspace
  • Interactions from repeat Optimism users
  • Interactions from Optimism users with high trust scores / onchain reputations
  • Interactions of new Optimism users
  • Open source license of contract code

The round is expected to receive applications from hundreds of projects building on six Superchain networks (OP Mainnet, Base, Frax, Metal, Mode, and Zora). Details for the round can be found here.

At Open Source Observer, our objective is to help the Optimism community arrive at up to 20 credible impact metrics that can be applied to projects with contracts on the Superchain.

This page explains where the metrics come from and includes a working list of all metrics under consideration for badgeholders. We will update it regularly, at least until the start of voting (June 23), to reflect the evolution of metrics. The first version of the metrics was released on 2024-05-16 and the most recent version (below) was released on 2024-06-24.

Trends and progress among OSS projects in Octant's latest epoch

· 10 min read
Carl Cervone
Co-Founder

Octant recently kicked off Epoch 3, its latest reward allocation round, featuring 30 projects. This round comes three months after Epoch 2, which included 24 projects. There are 20 projects continuing on from Epoch 2 into Epoch 3, including Open Source Observer.

During Epoch 2, we published a blog post with some high-level indicators about the 20+ open source software (OSS) projects participating in the round. In this post, we'll provide some insights about the new OSS projects and refresh our analysis for the returning projects.

Overall, in Epoch 3, Octant is helping support:

  • 26 (out of 30) projects with at least some recent OSS component to their work
  • 343 GitHub repos with regular activity
  • 651 developers making regular code commits or reviews

In the last 6 months, these 26 projects:

  • Attracted 881 first-time contributors
  • Closed over 4,646 issues (and created 4,856 new ones)
  • Merged over 9,745 pull requests (and opened 11,534 new ones)

Request for Impact Metrics

· 10 min read
Carl Cervone
Co-Founder

Over the past few months, we've been hard at work creating the infrastructure to collect and analyze impact metrics for open source projects. We're excited to announce that we're ready to start doing some new analysis on data from open source projects ... and we need your help!

This post includes some of the domains we're interested in exploring as well as an initial list of impact metrics we'd like to collect. We're looking for feedback on these metrics and suggestions for additional metrics we should consider. We're also looking for contributors to help us apply these impact metrics.

Get Involved

If you'd like to get involved, here's what to do:

  1. Apply to join the Data Collective. It only takes a few minutes. We'll review your application and reach out to schedule an onboarding call.
  2. Join our Discord server and say hello.
  3. Get inspiration from our Colab directory of starter notebooks for impact metrics. We also have copies of them in our Insights repo if you prefer to run them locally.
  4. Consult our docs and especially our impact metric spec as you prepare your analysis.

The rest of this post includes more details on the types of impact metrics we're interested in.
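To give a flavor of what applying an impact metric can look like, here is a minimal sketch that computes recent active developers from a hypothetical events table; the column names and the six-month window are illustrative assumptions, not our canonical spec.

```python
# Sketch of a simple impact metric ("recent active developers") computed from
# a hypothetical events table with columns: project, developer, event_type, date.
import pandas as pd

events = pd.read_csv("code_events.csv", parse_dates=["date"])

window_start = events["date"].max() - pd.DateOffset(months=6)
recent_commits = events[
    (events["date"] >= window_start) & (events["event_type"] == "COMMIT")
]

# Count distinct developers who committed to each project in the window
active_devs = recent_commits.groupby("project")["developer"].nunique()
print(active_devs.sort_values(ascending=False).head(10))
```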

Building a network of Impact Data Scientists

· 10 min read
Carl Cervone
Co-Founder

One of our primary goals at Kariba (the team behind Open Source Observer) is to build a network of Impact Data Scientists. However, “Impact Data Scientist” isn’t a career path that currently exists. It’s not even a job description that currently exists.

This post is our first step in trying to change that. In it, we discuss:

  1. Why we think the Impact Data Scientist is an important job of the future

  2. The characteristics and job spec of an Impact Data Scientist

  3. Ways to get involved if you are an aspiring Impact Data Scientist

    Spoiler alert: join this group chat and apply for data access here

One important caveat. This post is focused on building a network of Impact Data Scientists that serve crypto open source software ecosystems. In the long run, we hope to see Impact Data Scientists work in all sorts of domains. We are starting in crypto because there is already a strong culture around supporting open source software and decentralizing grantmaking decisions. We hope this culture of building in public and experimenting crosses over to non-crypto grantmaking ecosystems. When it does, we’d love to help build a network of Impact Data Scientists in those places too!

Impact pools on Arbitrum: identifying projects that are driving ecosystem growth

· 10 min read
Carl Cervone
Co-Founder

In our last post, we provided a snapshot of the open source software projects building on Arbitrum. In this post, we will apply a series of experimental impact metrics to identify positive growth and network contribution trends across a cohort of more than 300 major projects on Arbitrum.

We believe impact metrics such as these are instrumental in helping the Arbitrum DAO better design incentives and allocate capital across its ecosystem. The metrics we've included are all derived from both onchain and off-chain project data. They include well-established crypto indicators like active users, sequencer fees, and transaction counts as well as common OSS metrics like full-time active developers, issues closed, and new contributors.

The real value, however, lies in combining simple metrics in novel ways to filter and benchmark projects' contributions. We introduce four "impact pools" that can assist with this type of analysis. The pools are:

  • Sustainable user growth: projects that not only bring large numbers of active users to the network but also retain and connect them easily to other dapps
  • Developer growth: projects with the most developer activity and new contributors to their GitHub repos in recent months
  • Blockspace demand: projects with the most transactions and sequencer fee contributions
  • Momentum: projects with a mix of positive developer and onchain user trends
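As a rough illustration of how simple metrics can be blended into a pool like the ones above, the sketch below normalizes a few indicators and combines them into a single momentum-style score; the metric names, weights, and scaling are assumptions for illustration, not our exact methodology.

```python
# Illustrative "impact pool" construction (not our exact methodology): rank
# projects by a blend of normalized onchain and developer metrics.
import pandas as pd

# Hypothetical project-level indicators
df = pd.DataFrame(
    {
        "active_users": [12_000, 800, 45_000],
        "sequencer_fees_eth": [3.2, 0.4, 11.5],
        "new_contributors": [14, 3, 40],
    },
    index=["project_a", "project_b", "project_c"],
)

# Min-max normalize each metric to [0, 1] so they can be blended
normalized = (df - df.min()) / (df.max() - df.min())

# A "momentum"-style pool: equal blend of user, fee, and developer signals
pool_weights = {
    "active_users": 1 / 3,
    "sequencer_fees_eth": 1 / 3,
    "new_contributors": 1 / 3,
}
df["momentum_score"] = sum(w * normalized[c] for c, w in pool_weights.items())

print(df.sort_values("momentum_score", ascending=False))
```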