Today we are launching the OSSCAR Index, the Open Source Supabase Commit Analytical Ranking: a quarterly ranking of the fastest-growing open source organizations, measured with a transparent, reproducible methodology. The site, the data, and the scoring code are all open source. The first edition covers Q1 2026 and is live now.
Open source has a ranking problem.
Most "top open source" lists rank by raw totals: stars, downloads, contributors. Those are real signals, and they accumulate for good reasons. They also tell you who was big yesterday, not who is growing today. The list of fastest-growing open source projects looks nothing like the list of largest ones, and right now there is no good way to find it.
We've been working with >commit to create the definitive ranking of open source projects.
## What stood out in Q1 2026
A few observations from this quarter's data:
- Openclaw was the breakout story of the quarter. It entered the quarter on January 1 just 236 stars above the Scaling threshold and finished at 365,000. It went from 29 contributors to 1,383, and from zero package downloads to 16.7 million. Growth like that almost never happens at this scale. It is one of the few projects in either division where every signal compounded at once.
- AI agents dominate Emerging. The majority of the top 10 Emerging projects are autonomous agent frameworks, AI-native developer tools, or AI-assisted personal workflows. Paperclip, a small team building an autonomous business in the open, took the #1 Emerging spot with contributor growth faster than almost any other project this quarter.
- Not everything at the top is AI. Craft Docs, a productivity tool often described as a Notion alternative, came in at #3 Emerging on the back of 768,000 new npm downloads. npmx, a fast browser for the npm registry, picked up 237 new contributors and ranked #7. In Scaling, the Mantine UI library and the Free Ebook Foundation (the project behind Project Gutenberg) both made the top 100 on contributor growth alone. The methodology surfaces real momentum wherever it shows up, including in categories that have been quietly compounding for years.
- The rankings are genuinely global. Tsinghua University's MAIC, Sipeed (an AIoT hardware platform), Tencent Connect, DingTalk-Real-AI, and BIT-DataLab all appear in the top 100 across both divisions. Most "top open source" lists implicitly rank by signals visible to Western package registries and English-language attention. OSSCAR looks globally.
- Scaling is harder than Emerging. The bar to show up in the Scaling division is different. You need to already be large, and then grow faster than other large projects. Outside Openclaw, most of the Scaling top 10 grew at multiples between 0.5x and 5x. Modest in percentage terms, but on much larger absolute numbers. Every name in the Scaling top 10 is a project with real production users.
- A small "Claw" cluster has formed. Openclaw at #1 Scaling, ZeroClaw Labs at #2 Emerging, NullClaw at #15 Emerging, and a GoClaw fork already in the wild. Three months ago, none of these names existed.
## What it measures
The index ranks GitHub organizations by the rate at which their communities are growing, across three signals:
- Net new GitHub stars
- Unique contributors
- Package downloads from npm, PyPI, and Cargo
Each signal is normalized within a division so a 200-person team and a five-person team can be compared fairly. The three normalized scores are then combined into a single composite using an L² norm (the square root of the sum of squares). The L² norm rewards standout growth on a single signal over more balanced growth across the board, and it keeps the penalty small for projects that are missing a specific metric, such as a library without a published package.
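In symbols, with each normalized signal on a 0 to 100 scale (the symbol names here are ours), the composite and its ceiling work out to:

```latex
% Composite as the L2 norm of the three normalized signals,
% each scaled to 0-100 within its division.
\[
  \text{composite} = \sqrt{s_{\text{stars}}^{2} + s_{\text{contributors}}^{2} + s_{\text{downloads}}^{2}},
  \qquad
  \text{max} = \sqrt{3 \cdot 100^{2}} \approx 173.2 .
\]
```

That maximum of roughly 173 is the composite ceiling quoted in the scoring section below.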
The OSSCAR Index focuses on growth, not size. A project with 800 stars that doubled in a quarter can outrank a project with 80,000 stars that added 5%.
## Two divisions
Ranking a new AI agent framework against Kubernetes is not useful. So the index splits organizations into two independent leaderboards based on their star count at the start of the quarter:
- Emerging: fewer than 1,000 stars at quarter-start
- Scaling: 1,000 stars or more at quarter-start
Divisions lock at quarter-start. Cross the threshold mid-quarter and you still compete in Emerging that cycle. This keeps the peer group fair and prevents projects from gaming which division they sit in.
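As a minimal sketch of that rule (the function name and data shape here are ours, not necessarily the pipeline's):

```typescript
type Division = "emerging" | "scaling";

// Stars at the quarter-start snapshot decide the peer group for the whole
// cycle; crossing 1,000 mid-quarter does not move a project into Scaling.
const SCALING_THRESHOLD = 1_000;

function assignDivision(starsAtQuarterStart: number): Division {
  return starsAtQuarterStart >= SCALING_THRESHOLD ? "scaling" : "emerging";
}
```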
## How scoring works
The score answers a simple question: how much faster is this project growing this quarter than its peers?
For each project we look at three things over the course of the quarter: how many new GitHub stars it picked up, how many new contributors showed up, and how many more package downloads it saw. Each of those gets compared against everyone else in the same division and turned into a number from 0 to 100. Combine them with an L² norm (the square root of the sum of squares) and you get a composite out of ~173.
A few things worth knowing:
- Small bases do not get a free ride. A project that goes from 2 stars to 20 is not treated as 10x growth. Minimum thresholds keep tiny numbers from producing absurd rates.
- Exceptional growth stands out. Being truly outstanding on a single signal can beat being merely good on all three. A project that goes huge on stars but has no package downloads can still top a well-rounded peer. We want breakout winners to be visible.
- Missing signals do not hurt you. A project with no published npm or PyPI package is scored only on the signals that apply to it, and if those signals are growing fast, it can still top the ranking. That's by design. We want to surface outliers wherever they show up, and as we add more signals in future releases, no project should be penalized for the metrics it doesn't have.
- No decline. If a signal went down over the quarter, it does not count against you. We rank growth, not loss.
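Putting those rules together, here is a minimal sketch of how one division could be scored. It assumes percentile-style normalization to a 0 to 100 scale and illustrative padding values; the names, numbers, and exact normalization are ours, not necessarily what commitvc/osscar implements.

```typescript
type Signal = "stars" | "contributors" | "downloads";

interface ProjectGrowth {
  org: string;
  // Quarter-start base and quarter-over-quarter delta for each signal.
  // A null delta means the signal does not apply (e.g. no published package).
  signals: Partial<Record<Signal, { base: number; delta: number | null }>>;
}

const ALL_SIGNALS: Signal[] = ["stars", "contributors", "downloads"];

// Illustrative padding: the denominator never drops below these values,
// so going from 2 stars to 20 is not treated as 10x growth.
const BASE_PADDING: Record<Signal, number> = { stars: 50, contributors: 5, downloads: 1_000 };

// Growth rate for one signal, or null when the signal does not apply.
// Declines clamp to zero: growth is ranked, loss is not.
function growthRate(p: ProjectGrowth, s: Signal): number | null {
  const entry = p.signals[s];
  if (!entry || entry.delta == null) return null;
  const paddedBase = Math.max(entry.base, BASE_PADDING[s]);
  return Math.max(0, entry.delta) / paddedBase;
}

// Percentile of `value` among its division peers, scaled to 0-100.
function percentile(value: number, peers: number[]): number {
  if (peers.length === 0) return 0;
  return (peers.filter((x) => x <= value).length / peers.length) * 100;
}

// Score every organization in one division against its peers.
function scoreDivision(projects: ProjectGrowth[]): Map<string, number> {
  const peerRates: Record<Signal, number[]> = { stars: [], contributors: [], downloads: [] };
  for (const p of projects) {
    for (const s of ALL_SIGNALS) {
      const r = growthRate(p, s);
      if (r != null) peerRates[s].push(r);
    }
  }

  const scores = new Map<string, number>();
  for (const p of projects) {
    // L2 norm of whichever normalized signals apply; a missing signal simply
    // contributes nothing, and the composite tops out near sqrt(3 * 100^2) ~= 173.
    let sumSquares = 0;
    for (const s of ALL_SIGNALS) {
      const r = growthRate(p, s);
      if (r == null) continue;
      sumSquares += percentile(r, peerRates[s]) ** 2;
    }
    scores.set(p.org, Math.sqrt(sumSquares));
  }
  return scores;
}
```

The properties in the list above fall out directly: negative deltas clamp to zero, padded bases keep tiny denominators from inflating rates, and a project missing a signal is scored only on the signals it has.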
The full methodology is on the site. The site itself, the data pipeline, and the scoring code are all on GitHub: commitvc/osscar. If you think a weighting is wrong, a data source is missing, or a division boundary should move, propose it. Read the code, open an issue, send a pull request. We mean it.
## Why we are doing this
Supabase is an open source company. We run on the open source ecosystem: Postgres, PostgREST, pgvector, Deno, and dozens more tools. We want that ecosystem to be healthy, visible, and legible to developers, customers, and investors who are trying to find what is working. A good index helps new projects get discovered. Discovery helps contributors show up. Contributors ship features. Features create users. That flywheel is how open source compounds.
## For the ranked projects
If you appear in the Q1 2026 OSSCAR Index: congratulations. You can download a badge from your entry on the website to promote your placement in the list.
If you think you should be ranked and are not, check the methodology page first. The most common reasons:
- You are a personal account, not an organization
- Your signal volume falls below the padding threshold
- Your growth rate was flat or negative this quarter
We update quarterly. Q2 2026 data collection is already underway.
## What's next
Three things on the near-term roadmap:
- More package managers. Go modules, Rust crates beyond the current coverage, and a better story for projects that distribute via container images.
- A clearer RFC process for methodology changes. Today we review proposals and merge what makes sense. By the end of 2026 we want a public RFC process so that weighting changes, new signals, and division boundaries have a predictable path from idea to merge.
- Historical rankings. The first index is a snapshot. The second will be a trend. By the fourth quarter, we will have a picture of which projects sustain growth and which flash and fade.