Early experiments with synthetic controls and causal inference

· 4 min read
Carl Cervone
Co-Founder

We’ve been thinking a lot about advanced metrics lately. We want to get better at measuring how specific types of interventions impact the public goods ecosystem.

For example, we frequently seek to compare the performance of projects or users who received token incentives against those who did not.

However, unlike a controlled A/B test, we're analyzing a real-world economy, where it's impossible to randomize treatment and control groups.

Instead, we can use advanced statistical methods to estimate the causal effect of treatments on target cohorts while controlling for other factors like market conditions, competing incentives, and geopolitical events.

This post explores our early experiments with synthetic controls and causal inference in the context of crypto network economies.
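To give a flavor of the core idea (this is a toy sketch with made-up numbers, not our actual pipeline): a synthetic control is a weighted combination of untreated units chosen to track the treated unit's pre-treatment trajectory, so its post-treatment path serves as a counterfactual. The sketch below uses plain least squares for the weights; the full method constrains them to be non-negative and sum to one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: monthly activity for one treated project and three untreated
# peers (all numbers are synthetic, for illustration only).
T_pre, T_post = 8, 4  # periods before / after the incentive program
controls = rng.normal(10, 0.5, size=(3, T_pre + T_post)).cumsum(axis=1)
treated = controls.mean(axis=0) + rng.normal(0, 0.5, T_pre + T_post)
treated[T_pre:] += 5.0  # simulated treatment effect

# Weight the control units so their combination best matches the treated
# unit's PRE-treatment trajectory. (Plain least squares here; the full
# synthetic control method restricts weights to the probability simplex.)
w, *_ = np.linalg.lstsq(controls[:, :T_pre].T, treated[:T_pre], rcond=None)

synthetic = w @ controls  # counterfactual: the "no treatment" trajectory
effect = treated[T_pre:] - synthetic[T_pre:]
print(f"estimated average treatment effect: {effect.mean():.2f}")
```

Because the synthetic series is fit only on pre-treatment data, any post-treatment gap between the treated unit and its synthetic twin is attributed to the intervention.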

Opening up the ballot box (RF5 edition)

· 13 min read
Carl Cervone
Co-Founder

Optimism’s Retro Funding Round 5 (RF5) just wrapped up, with 79 projects (out of ~130 applicants) awarded a total of 8M OP for their contributions to Ethereum and the OP Stack. You can find all the official details about the round here.

In some ways, this round felt like a return to earlier Retro Funding days. There were fewer projects than in Rounds 3 and 4. Venerable teams like Protocol Guild, go-ethereum, and Solidity were back in the mix. Voters voted on projects instead of metrics.

However, RF5 also introduced several major twists: voting within categories, guest voters, and an expertise dimension. We’ll explain all these things in a minute.

Like our other posts in the “opening up the ballot box” canon, this post will analyze the shape of the rewards distribution curve and the preferences of individual voters using anonymized data. We’ll also take a deep dive into the results by category and compare expert vs non-expert voting patterns.

Finally, we'll tackle the key question this round sought to answer: do experts vote differently than non-experts? In our view, the answer is yes. We have a lot of data on this topic, so you're welcome to draw your own conclusions.

Introducing new Open Collective transactions datasets

· 3 min read
Javier Ríos
Engineer

Open Collective is a platform that enables groups to collect and disburse funds transparently. It is used by many open-source projects, communities, and other groups to fund their activities. Notable projects include Open Web Docs (maintainers of MDN Web Docs), Babel, and Webpack.

At Open Source Observer, we have been working on collecting and processing Open Collective data to make it available for analysis. This includes all transactions made on the platform, such as donations, expenses, and transfers. Datasets are updated weekly.

Building a network of Impact Data Scientists

· 10 min read
Carl Cervone
Co-Founder

One of our primary goals at Kariba (the team behind Open Source Observer) is to build a network of Impact Data Scientists. However, “Impact Data Scientist” isn’t a career path that currently exists. It’s not even a job description that currently exists.

This post is a first step in changing that. In it, we discuss:

  1. Why we think the Impact Data Scientist is an important job of the future
  2. The characteristics and job spec of an Impact Data Scientist
  3. Ways to get involved if you are an aspiring Impact Data Scientist

One important caveat. This post is focused on building a network of Impact Data Scientists that serve crypto open source software ecosystems. In the long run, we hope to see Impact Data Scientists work in all sorts of domains. We are starting in crypto because there is already a strong culture around supporting open source software and decentralizing grantmaking decisions. We hope this culture of building in public and experimenting crosses over to non-crypto grantmaking ecosystems. When it does, we’d love to help build a network of Impact Data Scientists in those places too!

WAR for public goods, or why we need more advanced metrics in crypto

· 9 min read
Carl Cervone
Co-Founder

In baseball, there’s an advanced statistic called WAR, short for Wins Above Replacement. It measures a player’s overall contribution to their team by comparing them to a “replacement-level” player—a hypothetical average player who could easily be brought in from the bench or minor leagues. The higher a player’s WAR, the more valuable they are to their team.

Now, let’s apply this concept to decentralized networks like Ethereum or its Layer 2s, which steward ecosystems of public goods including infrastructure, libraries, and permissionless protocols.

Just as baseball teams aim to build the best roster, ecosystem funds and crypto foundations strive to create the strongest community of developers and users within their networks. They attract these participants through incentive programs like grants and airdrops.

But how can the success of these initiatives be effectively measured? One approach is to evaluate how well these programs retain community members and generate compounding network effects compared to the average across the broader crypto landscape. The best networks are the ones that achieve the highest WAR outright or per unit of capital allocated.

This post explores how an empirically-derived metric similar to WAR might be applied to ecosystem grants programs as a way of measuring ROI. It includes some use case ideas (like a WAR oracle) and strawman WAR formulas for protocols and open source software (OSS) projects. It concludes with some ideas for getting started.
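To make the analogy concrete, a strawman WAR-style score for a project might subtract a "replacement-level" baseline (say, the median of a comparison cohort) from the project's own metric. A minimal sketch with hypothetical numbers (the project names and developer counts are invented for illustration):

```python
from statistics import median

# Hypothetical monthly active developer counts (illustrative only).
cohort = {"proj_a": 42, "proj_b": 7, "proj_c": 15, "proj_d": 9, "proj_e": 11}

# "Replacement level": what a typical, easily-substituted project achieves.
replacement_level = median(cohort.values())

# WAR-style score: contribution above the replacement-level baseline.
war = {name: devs - replacement_level for name, devs in cohort.items()}
print(war)
```

A real formula would blend several normalized metrics rather than a single raw count, but the shape is the same: value is measured relative to what an easily-obtained replacement would deliver, not in absolute terms.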

While this is currently a thought experiment, it’s something we at OSO are seriously considering as we develop more advanced metrics for measuring impact.

Opening up the ballot box again (RF4 edition)

· 8 min read
Carl Cervone
Co-Founder

The voting results for Optimism's Retro Funding Round 4 (RF4) were tallied last week and shared with the community.

This is the last in a series of posts on RF4, analyzing the ballot data from different angles. First, we cover high-level trends among voters. Then, we compare voters’ expressed preferences (from a pre-round survey) against their revealed preferences (from the voting data). Finally, we perform some clustering analysis on the votes and identify three distinct “blocs” of voters.

Retro Funding aims for iteration and improvement. We hope these insights can inform both the evolution of impact metrics and governance discussions around impact, badgeholder composition, and round design.

You can find links to our work here.

OSO Data Portal: free live datasets open to the public

· 3 min read
Raymond Cheng
Co-Founder

At Open Source Observer, we have been committed to building everything in the open from the very beginning. Today, we take that openness to the next level by launching the OSO Data Exchange on Google BigQuery. Here, we will publish every dataset we have as a live, up-to-date, free-to-use dataset. In addition to sharing every model in the OSO production data pipeline, we are sharing source data for blocks/transactions/traces across 7 chains in the OP Superchain (Optimism, Base, Frax, Metal, Mode, PGN, Zora), Gitcoin Data, and OpenRank. This builds on the existing BigQuery public data ecosystem that includes GitHub, Ethereum, Farcaster, and Lens data. To learn more, check out the data portal here:

opensource.observer/data
What’s been the impact of Retro Funding so far?

· 14 min read
Carl Cervone
Co-Founder

This post is a brief exploration of the before-after impact of Optimism’s Retro Funding (RF) on open source software (OSS) projects. For context, see some of our previous work on the Optimism ecosystem and especially this one from the start of RF3 in November 2023.

We explore:

  1. Cohort analysis. Most RF3 projects were also in RF2. However, most projects in RF4 are new to the game.
  2. Trends in developer activity before/after RF3. Builder numbers are up across the board since RF3, even when compared to a baseline cohort of other projects in the crypto ecosystem that have never received RF.
  3. Onchain activity before/after RF3. Activity is increasing for most onchain projects, especially returning ones. However, RF impact is hard to isolate because L2 activity is rising everywhere.
  4. Open source incentives. Over 50 projects turned their GitHubs public to apply for RF4. Will building in public become the norm or were they just trying to get into the round?
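The comparison in point 2, funded projects versus a never-funded baseline cohort, is essentially a difference-in-differences estimate. A toy version with invented numbers:

```python
# Hypothetical average monthly active developers (illustrative numbers only).
rf_before, rf_after = 10.0, 14.0    # projects that received Retro Funding
base_before, base_after = 8.0, 9.0  # never-funded baseline cohort

# Difference-in-differences: growth in the funded cohort minus growth in
# the baseline cohort, netting out ecosystem-wide trends that affect both.
did = (rf_after - rf_before) - (base_after - base_before)
print(did)  # 3.0
```

Subtracting the baseline's growth is what lets us say builder numbers are up "even when compared to" other crypto projects, rather than just up because the whole L2 ecosystem is growing.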

As always, we've included source code for all our analysis (and even CSV dumps of the underlying data), so you can check our work and draw your own conclusions.

A deeper dive on the impact metrics for Optimism Retro Funding 4

· 11 min read
Carl Cervone
Co-Founder

Voting for Optimism’s fourth round of Retroactive Public Goods Funding (“Retro Funding”) opened on June 27 and will run until July 11, 2024. You can check out the voting interface here.

As discussed in our companion post, Impact Metrics for Optimism Retro Funding 4, the round is a significant departure from the previous three rounds. This round, voters will be comparing just 16 metrics – and using their ballots to construct a weighting function that can be applied consistently to the roughly 200 projects in the round.

This post is a deeper dive on the work we did at Open Source Observer to help organize data about projects and prepare Optimism badgeholders for voting.

Reflections on Filecoin's first round of RetroPGF

· 10 min read
Carl Cervone
Co-Founder

Filecoin’s first RetroPGF round ("FIL RetroPGF 1") concluded last week, awarding nearly 200,000 FIL to 99 (out of 106 eligible) projects.

For a full discussion of the results, I strongly recommend reading Kiran Karra’s article for CryptoEconLab. It includes some excellent data visualizations as well as links to raw data and anonymized voting results.

This post will explore the results from a different angle, looking specifically at three aspects:

  1. How the round compared to Optimism’s most recent round (RetroPGF3)
  2. How impact was presented to badgeholders
  3. How open source software impact was rewarded by badgeholders

It will conclude with some brief thoughts on how metrics can help with evaluation in future RetroPGF rounds.

As always, you can view the analysis notebooks here and run your own analysis using Open Source Observer data by going here. If you want additional context for how the round was run, check out the complete Notion guide here.