
Perspective


Early experiments with synthetic controls and causal inference

4 min read
Carl Cervone
Co-Founder

We’ve been thinking a lot about advanced metrics lately. We want to get better at measuring how specific types of interventions impact the public goods ecosystem.

For example, we frequently seek to compare the performance of projects or users who received token incentives against those who did not.

However, unlike a controlled A/B test, we’re analyzing a real-world economy, where it’s impossible to randomize treatment and control groups.

Instead, we can use advanced statistical methods to estimate the causal effect of treatments on target cohorts while controlling for other factors like market conditions, competing incentives, and geopolitical events.
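
As a rough, hypothetical sketch of how a synthetic control works in this setting (none of the data, metrics, or time windows below come from OSO; they are illustrative assumptions), the idea is to weight a donor pool of untreated projects so that their combined pre-treatment trajectory tracks the treated project, then read the post-treatment gap as the estimated effect of the incentive:

```python
# Minimal synthetic-control sketch. All data here is simulated and hypothetical.
# We find non-negative donor weights (summing to 1) that reproduce the treated
# project's pre-treatment outcome, then compare post-treatment outcomes against
# that synthetic twin.
import numpy as np
from scipy.optimize import minimize

def fit_synthetic_control(y_treated_pre, Y_donors_pre):
    """Return donor weights w >= 0, sum(w) == 1, minimizing pre-treatment fit error."""
    n_donors = Y_donors_pre.shape[1]
    w0 = np.full(n_donors, 1.0 / n_donors)          # start from equal weights
    loss = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    result = minimize(
        loss, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n_donors,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return result.x

# Hypothetical monthly active-developer counts: rows = months, columns = donor projects.
rng = np.random.default_rng(0)
Y_donors = rng.poisson(lam=50, size=(24, 10)).astype(float)
y_treated = Y_donors[:, :3].mean(axis=1) + rng.normal(0, 2, size=24)

pre, post = slice(0, 18), slice(18, 24)              # incentive program starts at month 18
w = fit_synthetic_control(y_treated[pre], Y_donors[pre])
effect = y_treated[post] - Y_donors[post] @ w        # estimated per-month treatment effect
print(np.round(effect, 2))
```

In practice the choice of donor pool, outcome metric, and pre/post windows all require care; this is only meant to show the mechanics.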

This post explores our early experiments with synthetic controls and causal inference in the context of crypto network economies.

WAR for public goods, or why we need more advanced metrics in crypto

9 min read
Carl Cervone
Co-Founder

In baseball, there’s an advanced statistic called WAR, short for Wins Above Replacement. It measures a player’s overall contribution to their team by comparing them to a “replacement-level” player—a hypothetical player who could easily be brought in from the bench or minor leagues. The higher a player’s WAR, the more valuable they are to their team.

Now, let’s apply this concept to decentralized networks like Ethereum or its Layer 2s, which steward ecosystems of public goods including infrastructure, libraries, and permissionless protocols.

Just as baseball teams aim to build the best roster, ecosystem funds and crypto foundations strive to create the strongest community of developers and users within their networks. They attract these participants through incentive programs like grants and airdrops.

But how can the success of these initiatives be effectively measured? One approach is to evaluate how well these programs retain community members and generate compounding network effects compared to the average across the broader crypto landscape. The best networks are the ones that achieve the highest WAR outright or per unit of capital allocated.

This post explores how an empirically derived metric similar to WAR might be applied to ecosystem grants programs as a way of measuring ROI. It includes some use case ideas (like a WAR oracle) and strawman WAR formulas for protocols and open source software (OSS) projects. It concludes with some ideas for getting started.
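
As a toy illustration of what such a strawman formula could look like (the metric names, weights, and replacement-level baseline below are hypothetical assumptions, not the formulas proposed in the post), one might score a project against a replacement-level baseline and normalize by the capital allocated to it:

```python
# Hypothetical strawman WAR for an OSS project: value generated above a
# "replacement-level" baseline, per unit of grant capital allocated.
# All field names, weights, and example numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProjectStats:
    active_devs: float        # monthly active developers retained in the ecosystem
    onchain_users: float      # monthly active addresses attributable to the project
    grant_capital: float      # capital allocated to the project (e.g. in USD)

def strawman_war(project: ProjectStats, replacement: ProjectStats,
                 dev_weight: float = 1.0, user_weight: float = 0.01) -> float:
    """Wins Above Replacement per dollar of grant capital (toy formula)."""
    project_score = dev_weight * project.active_devs + user_weight * project.onchain_users
    baseline_score = dev_weight * replacement.active_devs + user_weight * replacement.onchain_users
    return (project_score - baseline_score) / max(project.grant_capital, 1.0)

# Example: a funded project vs. the median unfunded ("replacement-level") project.
funded = ProjectStats(active_devs=12, onchain_users=4000, grant_capital=50_000)
replacement_level = ProjectStats(active_devs=3, onchain_users=500, grant_capital=0)
print(round(strawman_war(funded, replacement_level), 6))
```

The interesting design questions are what goes into the score and how the replacement-level baseline is estimated, which is exactly what the post explores.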

While this is currently a thought experiment, it’s something we at OSO are seriously considering as we develop more advanced metrics for measuring impact.

Fund your dependencies

4 min read
Carl Cervone
Co-Founder

Messari just released their annual Crypto Theses for 2024. This year’s report included a chart from a16z’s State of Crypto 2023 showing npm downloads for three of the leading packages used by decentralized apps going up and to the right, reaching all-time highs in late 2023. Messari founder Ryan Selkis said “If I could invest blindly into crypto based on a single chart, it’s this one.”

[Chart from a16z’s State of Crypto 2023: npm downloads for leading crypto developer packages reaching all-time highs in late 2023]

There’s a lot to love about this take, but one big problem: downloads are a terrible metric for monitoring ecosystem growth.

Levels of the game: the psychology of RetroPGF and how to build a better game

14 min read
Carl Cervone
Co-Founder

RetroPGF is a unique kind of repeated game. With each round, we are iterating on both the rules and the composition of players. These things matter a lot. To get better, we need to study whether the rules and player dynamics are having the intended effect.

This post looks at the psychology of the game during Round 3, identifies mechanics that might have caused us to deviate from our intended strategy, and suggests ways of mitigating such issues in the future.

Disclaimer: I was a voter and had a project in Round 3. I also made a lot of Lists.

Open Source, Open Data, Open Infra

5 min read
Raymond Cheng
Co-Founder

How Open Source Observer commits to being the most open and reliable source of impact metrics out there.

At Kariba Labs, we believe deeply in the power of open source software. That is why we are building Open Source Observer (aka OSO), an open source tool for measuring the impact of open source projects. In order to achieve our goal of making open source better for everyone, we believe that OSO needs more than just open source code. We are committed to being the most open and reliable source of impact metrics out there. We will achieve this by committing the OSO project to the following practices:

  • Open source software: All code is developed using permissive licenses (e.g. MIT/Apache 2.0)

  • Open data: All collected and processed data will be openly shared with the community (to the extent allowed by terms of service)

  • Open infrastructure: We will open up our infrastructure for anyone to contribute to or build upon, at cost.