The user discusses a comparison between Supabase and Nuvix, highlighting Nuvix's broader data and control approach. Nuvix offers multiple schema modes and built-in messaging, aiming for flexible data models and production-ready security.
A user is seeking community input for designing a new Backend-as-a-Service platform. They are interested in understanding developer preferences for API models, execution layers, realtime features, data formats, and custom logic languages. The user aims to align the platform with real development patterns.
Illustrious-Mail-587 introduces Nuvix, an open-source backend platform built in TypeScript. Nuvix offers flexible and powerful backend solutions with three schema types: Document, Managed, and Unmanaged. It integrates with PostgreSQL, Redis, and BullMQ, and includes a Next.js-based console. The platform also features a Storage API, Messaging API, and Auth system. The author seeks feedback and suggestions from the community.
The user is exploring a PostgREST-style client that builds queries into URL strings, offering full control over joins and filters on the client side. They compare this approach to Supabase's method, which relies on schema cache and foreign keys for automatic joins. The user seeks opinions on the transparency and debugging ease of explicit joins versus Supabase's automatic method.
i have a suggestion, can i dm you?
Good point. I have added an acknowledgements section in the repository crediting the Supabase Postgres project and its contributors since the Nix-based build setup builds on that work. Also appreciate the note. I am fairly new to working with Nix and Docker at this level, so the Supabase repo was a very helpful reference while putting this together.
can i dm you?
Are you using Supabase directly, or are you accessing it through your own backend?
are you using free tier or paid?
Why do you want to use Supabase?
The code has already been reviewed and improved. Nuvix now provides PostgreSQL 18 with all the extensions used in Supabase. Feel free to try it and see the results firsthand.
The conclusion is inaccurate. It would be better to review the code thoroughly before responding.
Hey everyone. I'm building an open-source BaaS called [Nuvix](https://github.com/nuvix-dev/nuvix) and needed a Postgres image that doesn't exist yet: PG 18.1 with the full extension stack (pgvector, PostGIS, pgsodium, Groonga, and about 25 others). Compiling everything naively lands you at 1GB+.

Using Nix for the build toolchain and strict multi-stage Docker boundaries to strip compilers, debug symbols, and source artifacts, I got it to 431MB, multi-arch for amd64 and arm64.

If you want to pull the standalone image and sandbox PG 18 locally: `docker pull nuvix/postgres:18.1`

Full Dockerfile is here if you want to dig into how the multi-stage build is structured: [https://github.com/nuvix-dev/postgres](https://github.com/nuvix-dev/postgres)

One thing worth noting: this image is built around Nuvix's internal schema structure, so it won't behave like a drop-in for the Supabase CLI.

For anyone who's done similar work: are there compilation flags or Nix caching patterns you've found that push the size down further? Genuinely curious what veterans in this space are doing.
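For readers who haven't combined the two: the shape of the trick is to do all compilation in a throwaway Nix stage and copy only the runtime closure into the final image. This is a hypothetical sketch of that pattern, not the actual Nuvix Dockerfile; the flake target `.#postgres` and the paths are placeholders.

```dockerfile
# Stage 1: build everything with Nix; compilers, sources, and debug
# symbols live only in this throwaway layer.
FROM nixos/nix:latest AS builder
WORKDIR /build
COPY . .
# ".#postgres" is a placeholder flake target for illustration.
RUN nix --extra-experimental-features "nix-command flakes" build .#postgres --out-link result && \
    mkdir -p /runtime && \
    nix-store --query --requisites ./result | xargs -I{} cp -a --parents {} /runtime

# Stage 2: copy only the runtime closure forward; nothing from the
# build toolchain reaches this layer.
FROM debian:bookworm-slim
COPY --from=builder /runtime/nix /nix
COPY --from=builder /build/result /usr/local/pgsql
ENV PATH="/usr/local/pgsql/bin:${PATH}"
CMD ["postgres"]
```

Because `nix-store --query --requisites` computes the exact runtime dependency set, nothing has to be stripped by hand; the final layer simply never contains the build inputs.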
thanks!
update: spun up a live instance running this image if anyone wants to poke around - postgres 18.1, full extension stack, direct sql console open. email:`test@kraz.in` password: `testpass` dashboard: [https://studio.kraz.in](https://studio.kraz.in) 24 hours, do your worst. drop anything interesting in the comments.
If you’re self hosting, there are no hidden costs from Nuvix itself. You just pay for your own infrastructure and whatever your SMTP or email provider charges. No platform level surprise fees beyond that.
Yes, Nuvix can handle that. After your DB operation completes, just call the Nuvix Messaging API to send the email. It does not auto trigger on DB events, so you need to invoke it from your app logic or a background job. Docs: [https://nuvix-docs.vercel.app/products/messaging/smtp](https://nuvix-docs.vercel.app/products/messaging/smtp)
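As a rough sketch of that flow: the endpoint path, header name, and payload shape below are assumptions for illustration, not the documented Nuvix Messaging API; see the linked docs for the real contract.

```typescript
// Hypothetical sketch: endpoint, header, and payload shape are
// illustrative assumptions, not the documented Nuvix API.
interface EmailPayload {
  subject: string;
  content: string;
  targets: string[]; // recipient identifiers
}

function buildEmailPayload(to: string[], subject: string, content: string): EmailPayload {
  return { subject, content, targets: to };
}

async function notifyAfterInsert(apiBase: string, apiKey: string): Promise<void> {
  // 1. ...your DB write happens here and succeeds...
  // 2. Nothing fires automatically on DB events, so the app invokes
  //    the messaging endpoint explicitly afterwards:
  const payload = buildEmailPayload(["user-123"], "Welcome", "Thanks for signing up!");
  await fetch(`${apiBase}/messaging/messages/email`, {
    method: "POST",
    headers: { "content-type": "application/json", "x-api-key": apiKey },
    body: JSON.stringify(payload),
  });
}
```

If delivery failures must not break the request path, the same call can be enqueued in a background job instead of awaited inline.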
You were right on two of the points, and I’ve adjusted the implementation accordingly.

On logging: raw API key material should not be written to logs under any circumstances. The logging layer now records only a boolean flag or a derived identifier depending on key type. Persistent keys log their ID. Stateless keys log a short SHA-256 fingerprint. No secrets or bearer tokens are persisted.

On CORS: the fallback that could emit permissive headers has been removed. Header emission now depends strictly on validated origin resolution. Rejected origins do not receive wildcard headers. Preflight behavior remains explicit and enforced.

On the AsyncLocalStorage discussion: request state in `Auth` is stored exclusively through `storage.getStore()` accessors. There are no static mutable fields holding auth state directly on the class. If evaluating isolation guarantees, the full repository context matters because correctness depends on how `storage` is instantiated and how request boundaries are established at the framework level. Reviewing a partial excerpt can easily lead to incorrect conclusions about scope semantics.

That said, the core principle behind the critique is valid: shared mutable process state in a concurrent Node runtime is unsafe. The codebase has been re-audited specifically for that class of issue.

Security review is valuable. Strong claims deserve strong validation, but equally, legitimate hardening suggestions should be incorporated without ego. The goal is robustness, not argument. If you identify anything further with a concrete code path, I am open to reviewing it.
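The fingerprint idea can be sketched in a few lines of Node; the names here are illustrative, not the actual Nuvix logging layer.

```typescript
import { createHash } from "node:crypto";

// Derive a short, non-reversible fingerprint: enough to correlate log
// lines to a specific key, useless for recovering the secret itself.
function keyFingerprint(secret: string): string {
  return createHash("sha256").update(secret).digest("hex").slice(0, 12);
}

// Log metadata only; the raw key never reaches the log stream.
console.log({ apiKeyUsed: true, keyFp: keyFingerprint("sk_live_example") });
```

Truncating the hex digest keeps log lines compact while still making accidental collisions between distinct keys vanishingly unlikely for audit purposes.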
https://github.com/nuvix-dev/nuvix/blob/main/apps/server/src/core/app-config.ts
You are asserting runtime behavior again without demonstrating it. Yes, static fields exist. That alone does not prove cross-request contamination. What matters is execution scoping. Every request is wrapped inside `storage.run(new Map(), () => { ... })` at the Fastify onRequest boundary. Inside that boundary, authorization state and flags are explicitly reinitialized before any userland handler runs.

If you believe concurrent requests can step on each other, the correct way to prove that is:

1. Spin up the server
2. Fire concurrent requests with different auth contexts
3. Demonstrate a privilege bleed or state crossover
4. Provide the minimal reproducible test

Without that, you are inferring a race condition based on surface structure, not observed behavior.

On CORS, your claim still hinges on the idea that a rejected origin can meaningfully escalate access. Preflight requests are rejected when `opts.origin` is falsy. For non-preflight requests, wildcard only applies in non-credentialed flows. If you believe there is a credentialed wildcard response allowing sensitive cross-origin access, show the exact request and resulting headers. CORS vulnerabilities are demonstrated with concrete request and response pairs, not theoretical branches.

On logging API keys: yes, the secret is logged to an internal queue intentionally for audit purposes. That is a design choice. You can argue it should be hashed. That is fair. But to call it a security issue you need to show exposure. Internal telemetry in a protected system is not equivalent to plaintext keys in public logs.

The claim that I “do not understand my own code” is strong language. If that is your position, the burden of proof is stronger than static inspection. Show the exploit path. Show the break. Show the failing isolation. Security review is adversarial validation, not confident narration. If you can demonstrate an actual cross-request privilege leak, CORS bypass, or key exposure vector, I will address it immediately.
If not, then what you are offering is architectural disagreement framed as vulnerability. Those are not the same thing.
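For anyone wanting to check the isolation claim themselves, the mechanism in question can be exercised standalone. This is a minimal illustration of AsyncLocalStorage request isolation with a hypothetical handler, not the Nuvix code itself.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

const storage = new AsyncLocalStorage<Map<string, unknown>>();

// Simulated per-request handler: stores its auth context, yields to the
// event loop so other "requests" interleave, then reads the context back.
async function handleRequest(userId: string): Promise<string> {
  return storage.run(new Map([["userId", userId]]), async () => {
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 20));
    return storage.getStore()!.get("userId") as string;
  });
}

// If contexts leaked across requests, some result would not match its input.
Promise.all(["alice", "bob", "carol"].map((u) => handleRequest(u))).then((results) => {
  console.log(results); // each handler sees only its own userId
});
```

A real repro against the server would follow the same shape: concurrent requests with distinct auth contexts, asserting that each response reflects only its own context.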
You are making confident claims about systems you clearly did not take the time to understand. What you are calling a “global mutable auth state” is implemented using AsyncLocalStorage. That provides per-request isolation through async context propagation. It is not shared memory between requests. If you believe it is, demonstrate an actual cross-request privilege leak with a reproducible example. Otherwise you are confusing process-level scope with request-level context, which are not the same thing in modern Node runtimes.

The hook system you dismiss as questionable architecture is a deliberate lifecycle abstraction. It functions similarly to middleware chains, guards, and interceptors in NestJS. The difference is that it is explicit and framework-agnostic. Under the hood it respects dependency boundaries, inversion of control, and separation of concerns. Services are injected. State is scoped. Execution order is deterministic. Calling it “vibe coded” because it does not look like the framework you are used to is not a security review. It is pattern bias.

On CORS, you state it is “effectively broken” without identifying a single misconfigured origin rule, header path, or preflight bypass. Security claims require specifics. Show the route. Show the request. Show the exploit. Otherwise it is speculation dressed up as authority.

If you believe API keys are logged in plaintext in production, point to the production logger configuration. If you believe verification codes are not cryptographically secure, measure the entropy and demonstrate the attack vector. Assertions without proof are not findings.

You also imply this was rushed or thrown together. I started this project in December 2024. The commit history is public. Anyone can verify the first commit and see the progression over the past year. This was not written in a weekend. It is the result of sustained iteration, refactoring, and architectural refinement. The history is there for anyone who cares to look.
The “1.0” label reflects API contract stability, not team size or corporate scale. Semantic versioning is about compatibility guarantees. It has nothing to do with how many engineers are on payroll or how many years something has been in production. There are single-developer systems that are stable and enterprise systems that are not. Headcount is not a metric for correctness.

What is actually dangerous is publicly declaring something insecure without presenting a concrete exploit path. That creates noise, not safety. If you have a reproducible vulnerability, present it. I will address it immediately. If not, what you are offering is not a security review. It is an opinion delivered with unwarranted certainty. There is a difference between criticism and projection. Right now, this reads like the latter.
Thanks for the honest take, really appreciate it. Totally fair to be cautious. Maybe give Nuvix a try on a side project first and see how it feels. We’re working hard to earn that production-level trust.
Auth, storage, and messaging are fairly similar in concept to Appwrite. The main difference is the database model. Nuvix uses three schema modes (Document, Managed, and Unmanaged) so you can choose between rapid NoSQL-style development, secure-by-default auto-RLS CRUD, or fully manual SQL control depending on the project. Realtime APIs and edge/serverless functions are not there yet but are already on the roadmap. SSO and Keycloak integration are also planned, while multiple OAuth providers are already supported. Nuvix is stateless, so scaling on Docker Swarm is straightforward by horizontally scaling the API containers and keeping state external. I’m also currently stuck in a loop deciding whether parts should move to Rust or not 😅, but I’ll figure that out soon.
https://docs.nuvix.in

I will fix that.
Schema exclude is only for client APIs. Server APIs still have access because they rely on scopes, not schema exposure rules. Per-user row access is handled through the powerful permission system, where access rules are evaluated against the authenticated user context. https://preview.redd.it/3jg33p5dffkg1.png?width=1350&format=png&auto=webp&s=e0e946d8a5bd01d0065cca1204c3a67f2534e5b6
However, you still need to apply this for every table. What happens if one table is missed?
https://preview.redd.it/u0w30iivcfkg1.png?width=1349&format=png&auto=webp&s=0e876402560a1e1ae224956e1e821c0d4c96e05e
The setup you used to restrict PostgREST to the backend feels unnecessarily complicated. In Nuvix, you can simply define which schemas should be accessible to the client, making the configuration much easier and cleaner.
I am developing Nuvix, a Supabase alternative: [https://github.com/Nuvix-Tech/nuvix](https://github.com/Nuvix-Tech/nuvix). MCP support is not yet implemented and is part of the upcoming roadmap.
totally fair point. I am building a Supabase-like backend and trying to learn how people actually use it in production, not how docs assume it is used. Even one small answer, like API vs direct Postgres, helps avoid bad design decisions and repeated mistakes. Not collecting metrics or promoting anything, just learning from real-world experience.
YES
This is unnecessary. Simply use [Nuvix](https://github.com/Nuvix-Tech/nuvix), as it handles everything end to end.
[https://github.com/Nuvix-Tech/nuvix](https://github.com/Nuvix-Tech/nuvix)
The thoughts are mine. I just let AI clean up the wording so it doesn’t read like I typed it at 2 a.m. The analysis is fully human.
try this one [https://files.catbox.moe/j4y1jg.png](https://files.catbox.moe/j4y1jg.png)
Yes, exactly. Nuvix already supports ReBAC through its label system, team relationships, and resource-level linking. It lets you express “who can access what” based on actual connections between users, teams, and entities instead of fixed roles. That flexibility is a major reason it can handle multi-tenant and collaborative patterns without piling on custom policy logic.
Nuvix takes a broader approach to data and control. It offers three schema modes: a NoSQL-style document model, a managed SQL model with automatic RLS, policies, and permission tables, and an unmanaged SQL mode for full freedom. On top of that, it includes built-in messaging for email, SMS, and push. The goal is to give you flexible data models and production-ready security without extra setup.
Hey, what about this [https://i.ibb.co/ymgXxHGn/251111-07h55m16s-screenshot.png](https://i.ibb.co/ymgXxHGn/251111-07h55m16s-screenshot.png)
A **visual RLS builder** is an excellent concept and would deliver substantial value to the Supabase community, especially for teams managing multi-tenant environments. Although your tool focuses on generating precise **PostgreSQL RLS policy SQL**, many developers who require broader flexibility are adopting declarative **Role-Based Access Control (RBAC)** models. Platforms like **Nuvix** illustrate how a configuration-driven permission layer can streamline access management by abstracting the raw database logic.

Despite the shift toward RBAC, your visual builder remains highly relevant. It can simplify the RLS authoring process, reduce onboarding complexity, and serve as a bridge for teams not yet ready to transition to a full RBAC-based architecture. You should definitely build it.
Yeah, totally, Supabase’s style is super clean for simple stuff. The main difference is that Supabase relies on **foreign keys and schema cache** to figure out joins automatically. My design doesn’t: joins are written explicitly in the client, so you can link any tables, even if they don’t have FKs or live in different schemas.

Supabase’s SDK is nice, but once queries get complex it becomes awkward. For example, something like:

`select * from players where ((team_id = 'CHN' and age > 35) or (team_id != 'CHN' and age is not null));`

ends up as:

`.or('and(team_id.eq.CHN,age.gt.35),and(team_id.neq.CHN,age.not.is.null)')`

which is not exactly type-safe or easy to maintain. My approach keeps joins and filters as structured calls that compile to readable URLs. No foreign keys needed, and what you write is exactly what gets sent to the backend: transparent, explicit, and easy to debug.
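A toy version of that structured-filter idea, just to illustrate the compile-to-URL approach; the types and serializer here are hypothetical, not the actual client API.

```typescript
// Hypothetical filter model: structured values that serialize to an
// explicit, readable query-string fragment.
type Filter =
  | { and: Filter[] }
  | { or: Filter[] }
  | { col: string; op: string; value: string | number | null };

function serialize(f: Filter): string {
  if ("and" in f) return `and(${f.and.map((c) => serialize(c)).join(",")})`;
  if ("or" in f) return `or(${f.or.map((c) => serialize(c)).join(",")})`;
  return `${f.col}.${f.op}.${f.value}`;
}

// The players example from above, expressed as a structured value:
const where: Filter = {
  or: [
    { and: [{ col: "team_id", op: "eq", value: "CHN" }, { col: "age", op: "gt", value: 35 }] },
    { and: [{ col: "team_id", op: "neq", value: "CHN" }, { col: "age", op: "not.is", value: null }] },
  ],
};

console.log(`/players?where=${serialize(where)}`);
// → /players?where=or(and(team_id.eq.CHN,age.gt.35),and(team_id.neq.CHN,age.not.is.null))
```

Because the structure is plain data, the client can type-check operators per column before serialization, and the wire format stays readable enough to paste straight into a debugger.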
Fair point, but every “what if” today shapes the tools we use tomorrow.
Efficiency isn’t spoon-feeding. It’s just using better utensils.
Would you be interested in exploring a platform that offers similar capabilities but with enhanced ease of use and includes the features mentioned in the post?