Audit of RF4 Impact Metric Calculations

Objective

Conduct a code and data audit of the 16 impact metrics implemented by Open Source Observer for Optimism Retro Funding 4.

The intended business logic of the impact metrics will not change. The purpose of this audit is to identify any bugs or issues in the code. If any critical issues are found, a fix will be pushed to the query logic before voting closes. Changes made in this way will not affect voting ballots, as badgeholders are voting on impact metrics, not individual projects.

Context

A full write-up on the process OSO employed is available here and in a more distilled format here. All SQL models are available in interactive format here and from our GitHub.

The downstream models most relevant to this audit are available in interactive format here and on our GitHub here.

Auditor requirements

The auditor will need a strong command of SQL and blockchain concepts. Experience with dbt and working in a BigQuery environment will be helpful.

Important: If you are interested in participating in the bounty, please join our Discord and provide a short introduction, including a link to your GitHub profile. We will review your profile and let you know if you are eligible to participate.

Scope of audit

The auditor is expected to audit the following components of the impact metrics model:

  1. Unified events model

    1. Audit the primary event models (e.g., Events Daily to Project, 4337 Events), upstream source data (e.g., Base Transactions, Base Traces), and the dbt macros used for intermediate processing (e.g., contract_invocation_events, filtered_blockchain_events).

    2. Comment on the following aspects:

      1. Completeness and accuracy of underlying transaction and trace data for the six relevant chains
      2. Correct and consistent implementation of the RF4 transaction window (2023-10-01 to 2024-06-01); a sample window check is sketched after this list
      3. Correct and consistent implementation of the relevant RF4 onchain event types (e.g., gas fees, contract interactions)
      4. Implementation of “special cases” for 4337 interactions (see here) and EOA bridges (see here)
  2. Contract discovery and attribution model

    1. Audit the contract discovery logic (e.g., Derived Contracts, Contracts by Project) and the upstream logic for identifying factories, deployers, and deterministic deployments

    2. Comment on the following aspects:

      1. Completeness and accuracy of the Contracts by Project database, including efforts to de-duplicate contracts deployed by creator factories such as Zora’s and pool factories such as Aerodrome’s; a sample duplicate-attribution check is sketched after this list
      2. Correct attribution of all contract artifacts included in approved projects’ applications (see here)
  3. Trusted user model

    1. Audit the trusted user model logic (see here) and upstream source data (from Farcaster and reputation data providers)

    2. Comment on the following aspects:

      1. Completeness and accuracy of underlying source data
      2. Correct implementation of the trusted user model heuristics
  4. Impact metric implementation

    1. Audit the logic of the Impact Metrics by Project model and the 13 upstream models that power individual metrics

    2. Comment on the following aspects:

      1. Correct and consistent implementation of all 13 underlying metrics models and the 3 log-transformed metrics models
      2. Correct and consistent implementation of the summary model that aggregates all 16 metrics and links them to project applications; a sample coverage check is sketched after this list
  5. Open Source labeling

    1. Audit that project labels (see here) have been applied correctly based on the stated logic. Note that these checks are not implemented in SQL. You can replicate the API calls used for these checks here.

    2. Comment on the following aspects:

      1. Repo age checked correctly
      2. Repo license discovered correctly
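
The sketches below illustrate the kinds of spot checks an auditor might run against the models above. All table and column names in them are assumptions for illustration and may differ from the actual OSO models.

For component 1, a quick sanity check on the transaction window could confirm that no events fall outside 2023-10-01 to 2024-06-01. This assumes a long events table named events_daily_to_project with a bucket_day date column, and an inclusive start / exclusive end; confirm both against the model logic.

```sql
-- Hypothetical window check (assumed table and column names).
-- Flags any event rows dated outside the RF4 window.
SELECT
  MIN(bucket_day) AS earliest_event,
  MAX(bucket_day) AS latest_event,
  COUNTIF(bucket_day < DATE '2023-10-01'
          OR bucket_day >= DATE '2024-06-01') AS out_of_window_rows
FROM events_daily_to_project;
```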
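
For component 2, one way to surface de-duplication gaps is to list contract addresses attributed to more than one project. Again, contracts_by_project and its columns are assumed names, not the actual Contracts by Project schema:

```sql
-- Hypothetical duplicate-attribution check (assumed schema).
-- A contract deployed via a shared factory (e.g., Zora, Aerodrome)
-- should not be credited to multiple projects.
SELECT
  network,
  contract_address,
  COUNT(DISTINCT project_id) AS project_count
FROM contracts_by_project
GROUP BY network, contract_address
HAVING COUNT(DISTINCT project_id) > 1
ORDER BY project_count DESC;
```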
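
For component 4, a basic completeness check is that every approved project carries all 16 metrics in the summary model. The sketch assumes a long layout (one row per project per metric) in a table named impact_metrics_by_project; if the summary model is wide, the equivalent check is a NULL scan across the 16 metric columns:

```sql
-- Hypothetical coverage check (assumed long-format schema).
-- Projects missing any of the 16 metrics (13 base + 3 log-transformed)
-- are returned for investigation.
SELECT
  project_id,
  COUNT(DISTINCT metric_name) AS metric_count
FROM impact_metrics_by_project
GROUP BY project_id
HAVING COUNT(DISTINCT metric_name) < 16;
```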

Deliverable

The deliverable is a Google Doc or Markdown document that includes the following:

  1. Brief introduction of the auditor and their relevant experience
  2. Audit methodology and the environment used to test or replicate the data
  3. Audit findings, itemized by severity (critical, minor, OK):
    1. Critical means an issue that will almost certainly have a material effect on the metrics and should be addressed. Any critical issues should be accompanied by a CSV or a copy of the query logic needed to replicate the issue.
    2. Minor means an issue that is unlikely to have a material effect on the metrics but should be addressed in subsequent iterations or for peace of mind.
    3. OK means the auditor has reviewed but did not find any issue.
  4. [OPTIONAL] Recommendations for improving models for future funding rounds.

The final deliverable should be sent by email to carl[at]karibalabs[dot]co

Deadline

If you are interested in participating, you must let us know (via Discord) by 2024-07-03 23:59 UTC.

The submission deadline for audit reports is 2024-07-08 23:59 UTC.

Amount

Up to $1000 per audit, with up to three winning submissions. Partial payment may be awarded for strong submissions that address only a subset of the five components. Auditors will be required to complete KYC in order to receive payment.