
What if getting funded didn’t require a pitch deck—or even applying at all?
Early on, what got me excited was the idea of blockchain as a public database. That alone lets you do a lot of interesting things. One obvious use case is sending money—and that’s still the main thing blockchains are used for. Another is the idea of a smart contract.
The best explanation I’ve heard of a smart contract is a vending machine: you put money in, and you get something out. It works exactly like that—you put something in, and based on code and rules, something comes back out. No middlemen needed.
As I spent more time in the space, I started thinking more about the architecture. What I find really interesting about blockchain is that it flips the typical relationship between front-end and back-end. In most apps, the back-end is private—you control it, and only the front-ends you authorize can talk to it. With blockchain, the back-end is public—anyone in the world can build a front-end that interacts with it. That level of openness is pretty unique and opens up so many creative possibilities.
But what I’m perhaps most passionate about—and what really kept me in web3—is thinking about how it can solve the double-selling of impact. I come from the nonprofit world, and I saw that people would make an impact once and then get a marketing team to sell that same story to as many funders as possible. And they’d walk away with a lot of the money. Meanwhile, people who kept trying to actually create new impact—but didn’t put effort into marketing—would fall short.
So, the idea of saying, “Okay, I created this impact, I recorded it on a public database, someone bought it, and now I have to create the next impact to earn again,” got me excited.
It started with
Quadratic funding was different. You apply, get accepted into a round, and then rally support—your friends, your community, maybe some visibility on Twitter Spaces. You get a few direct contributions and then matching funds based on how many people supported your project. I started out getting maybe $500, $1,000. From there, it snowballed.
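The matching math is easy to see in miniature. Here is a minimal sketch of the standard quadratic funding formula, where a project's ideal match is the square of the sum of the square roots of its contributions; the function name, numbers, and pool cap are illustrative, not any platform's actual implementation:

```python
import math

def qf_matching(contributions, pool):
    """Quadratic funding sketch: a project's ideal match is
    (sum of square roots of its contributions)^2 minus the direct
    total, so breadth of support matters more than size of checks."""
    raw = []
    for project in contributions:
        total = sum(project)
        ideal = sum(math.sqrt(c) for c in project) ** 2
        raw.append(ideal - total)
    # scale everything down if the ideal matches exceed the pool
    total_raw = sum(raw)
    scale = min(1.0, pool / total_raw) if total_raw else 0.0
    return [round(m * scale, 2) for m in raw]

# Same $100 raised, very different crowds:
grassroots = [10] * 10   # ten supporters at $10 each
whale      = [100]       # one supporter at $100
print(qf_matching([grassroots, whale], pool=1000))  # -> [900.0, 0.0]
```

Ten people giving $10 each attract a far larger match than one person giving $100, which is exactly why rallying your community matters in these rounds.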
I started working with
Over time, I started to see a broader pattern. A lot of what I was drawn to had to do with solving market failures—externalities, where people create more value than they can capture; principal–agent problems, where decision-makers act in their own interest instead of the group’s. And then there’s information asymmetry—where the best people for a role or a grant might not even know it exists.
That’s when I also started to tweet. I was posting every day, sharing ideas and surfacing opportunities. It was my way of trying to reduce that asymmetry and make the ecosystem a bit more transparent.
At some point, I tweeted about being interested in AI, governance, and quantifying impact—especially in relation to externalities. Like, say you’ve created a software library that everyone uses but no one pays for. You’ve clearly created value, but how do you measure that?
Could we use AI to estimate how much someone deserves based on the value they’ve created? That’s the first step—just trying to make that invisible value visible.
After that tweet,
That’s what led to me joining the Ethereum Foundation. I started working on that mechanism, and eventually, they asked me to lead it—running competitions that apply this approach to different challenges across Ethereum: funding predictions, repo importance, bot detection, and misinformation. The structure is always the same—many AI models submit answers, and we reward the ones that best align with a trusted signal.
Exactly. Open source creates a lot of value, but the people building it often don’t capture much of it—and that’s really the core issue with public goods.
The way I define them is simple: value creation minus value capture. If you're creating more than you're capturing, that gap should be filled through public goods funding. And if you're capturing more than you create—like a company that pollutes while still turning a profit—you should probably be taxed to make up the difference.
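That definition can be written down directly. A toy illustration, with made-up numbers:

```python
def funding_gap(value_created, value_captured):
    """Public-goods gap: value creation minus value capture.
    Positive -> candidate for public goods funding;
    negative -> candidate for taxation (e.g. a profitable polluter)."""
    return value_created - value_captured

print(funding_gap(100, 10))  # open source library: gap of 90, fund it
print(funding_gap(50, 80))   # polluter: gap of -30, tax it
```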
But a lot of the conversation has started to move beyond the term “public goods.” Asking “How valuable is this for the world?” is really hard to answer. That’s why more people are thinking in terms of dependency graphs instead.
A dependency graph asks a different question: “For this project’s success, how important was this other project?” That’s much easier to answer. It gives you a way to trace actual relationships and dependencies between projects.
That framing matters. Imagine a city where everything is a private good. No public roads, no parks, no sanitation—just things you pay for directly. It’s not a place most people would want to live. I’m happy to give up part of my income to make sure shared infrastructure exists.
And the same goes for an ecosystem… If you only fund projects that earn revenue, you lose the basic building blocks that others depend on. A shop needs a road. An app needs core infrastructure. Most things we interact with are built on top of tools that don’t get paid directly.
Dependency graphs help make that visible, and they also help with accountability.
There’s some fatigue around the term “public goods.” It can feel like if you don’t have a revenue model, you just label yourself that and expect funding. But if you can say, “There are 20 projects that rely on us,” that’s clearer. It’s a better way to show value.
Yes! The core problem deep funding addresses is how to fund projects that create value without generating revenue. In a corporate setting, you have cost centers and profit centers. Twitter’s ad team is a profit center—it makes money and is easy to measure. But something like Community Notes is a cost center. It improves the product for everyone, but it doesn’t directly generate revenue.
Still, executives can allocate budget to it from the ad team, even if that team objects. But that kind of internal reallocation doesn’t exist in open source. Companies like Google earn revenue from user-facing services but rely on a stack of open source repos that go unfunded. There’s no automatic way to redirect revenue from profit to cost centers.
It’s all programmatic. Once the graph and weights are set, the money flows in automatically unless governance steps in to change it. But the hard part is setting the weights… How much should each repo earn?
To solve that, we run a machine learning competition. Different people submit models that predict how important each repo is. A jury scores a subset of the repos, which we use to pick the best-performing model. That model’s predictions are then used to assign weights to the whole graph.
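The selection step can be sketched as follows; the repo names, scores, and the use of mean squared error here are illustrative assumptions, not the competition's actual scoring rule:

```python
def pick_best_model(submissions, jury_scores):
    """Each submission maps repo -> predicted importance. The jury
    scores only a subset of repos; the model closest to the jury on
    that subset (lowest mean squared error here) wins."""
    def error(preds):
        return sum((preds[r] - s) ** 2
                   for r, s in jury_scores.items()) / len(jury_scores)
    return min(submissions, key=lambda name: error(submissions[name]))

submissions = {
    "model_a": {"geth": 0.5, "solidity": 0.3, "viem": 0.2},
    "model_b": {"geth": 0.2, "solidity": 0.2, "viem": 0.6},
}
jury_scores = {"geth": 0.45, "viem": 0.25}  # jury spot-checked 2 of 3 repos
print(pick_best_model(submissions, jury_scores))  # -> model_a
```

The winning model's predictions then apply to the whole graph, including the repos the jury never looked at.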
When an app earns revenue, funds are distributed across the graph, reaching both the projects that generate value directly and the ones supporting them in the background.
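A minimal sketch of that flow, assuming each project passes a fixed share of its incoming revenue down to its dependencies (the 25% share and the toy graph are invented for illustration):

```python
def distribute(revenue, graph, project, payouts=None, share=0.25):
    """Pass `share` of incoming revenue down weighted dependency
    edges; a project with no dependencies keeps everything."""
    if payouts is None:
        payouts = {}
    deps = graph.get(project, {})
    passed_on = revenue * share if deps else 0.0
    payouts[project] = payouts.get(project, 0.0) + revenue - passed_on
    for dep, weight in deps.items():
        distribute(passed_on * weight, graph, dep, payouts, share)
    return payouts

# toy graph: an app depends on a library, which depends on a compiler
graph = {
    "app":     {"library": 1.0},
    "library": {"compiler": 1.0},
}
print(distribute(1000.0, graph, "app"))
# -> {'app': 750.0, 'library': 187.5, 'compiler': 62.5}
```

The compiler never bills anyone, yet it still gets paid because the graph records who depends on it.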
In the context of web3, we’ve had about one new decentralized funding system every two years. In 2019, quadratic funding. In 2021, retrospective public goods funding (RetroPGF) emerged. In 2023, there was
Deep funding is modular by design. Instead of building everything in-house, we work with teams already solving key pieces of the problem. For example, we worked with
By plugging in the best tools for each part, it becomes way faster to spin up new funding mechanisms. That’s also the goal of
For sure!
But when we built the dependency graph, it turned out web3.js is barely used anymore. Tools like Viem and Viper have taken over. It still showed up in the graph at first, but once we looked at the data, the usage didn’t hold up, and it eventually dropped out.
That kind of shift is easy to miss if you're going off reputation. With a graph, you can see what’s still in use and what’s not. That’s not to say web3.js wasn’t important—it clearly was. And if you’re funding based on past impact, it still makes sense to include it. But if you’re focused on what’s active today, the graph shows where things have moved.
Right now, the most common funding mechanism in web3 is still, unfortunately, centralized grant-giving, meaning that a foundation decides who gets funding. I say “unfortunately” because a lot of it depends on how much work you put into outreach—talking to people, staying visible, building relationships. But the best developers often aren’t interested in that. They want to focus on building, not selling themselves.
There’s also a power dynamic. When you're applying to a foundation, there’s often a feeling that you need to win someone over to get funded. That’s why something like quadratic funding is exciting—it shifts power away from a few individuals. If the community believes in your work, you get funded. It’s more bottom-up.
But even QF has its limits. One of the big ones is what I call the “crying babies get fed” problem: only the people who apply and promote themselves get money. So having at least one funding mechanism that doesn’t rely on applications feels really important.
That’s where RetroPGF has a lot of potential. It started with badgeholder voting—basically giving a small group of people the power to decide how funds get distributed. But if that group is just five people, it’s not much different from a foundation. Optimism later scaled that to 500
What I’m most excited about is where RetroPGF is heading: a version that doesn’t require applications at all. Instead of waiting for people to ask for funding, you look at what’s actually being used—who’s creating value, what others are building on—and you fund based on that. That’s the ideal outcome: you make something useful, it helps others, and the value comes back to you automatically. No forms, no pitches, no chasing grants.
Whether that scales depends on a few things, though. The first challenge is
There’s also the retroactive angle. Most people want to fund what’s coming next—not what has already happened. That might make RetroPGF harder to scale unless the feedback loops are faster. Like—if someone does something valuable this month, they get funded next month. That kind of rhythm might make it feel more relevant.
The main metric I’ve set for myself is how many machine learning competitions we run.
We just wrapped one on predicting how much funding a project would receive in a grant round—$20,000 in prizes across six winners, which was cool. Now we’ve got a few more running: one on detecting Sybil wallets, one on ranking open source repos by importance, and another on forecasting future funding. They’re all part of a broader push to run fun, useful ML competitions that make funding smarter.
The main focus right now is