Advanced_Pudding9228 shares a detailed setup for implementing Row Level Security (RLS) in Supabase for projects owned by users and shared with teammates. The setup includes SQL code for creating tables, enabling RLS, and defining policies to manage permissions. Illustrious-Mail-587 suggests using Nuvix as an alternative solution.
The user describes realizing that a Micro SaaS needs a proper promotion pipeline rather than shipping straight to production, citing common failures such as hotfixes breaking signups and billing bugs. They seek feedback from others on their deployment strategies and offer advice on common pitfalls.
What you’re running into is a boundary problem, not a Supabase feature gap. Right now your app is treating state and payment as client-side events. localStorage “feels” like a database and a payment button “feels” like a payment, but neither is authoritative. That’s why clearing cookies wipes users and why clicking a Lemon Squeezy link upgrades people without paying.

The mental shift is this: the browser never decides who is paid. The backend does, after a verified signal. In Supabase terms, you don’t want to store credits or plans in the browser at all. You want a table keyed by auth.users.id, and that table only ever gets updated by trusted backend code. The frontend can read it, but it shouldn’t be able to grant itself anything.

Same with payments. Lemon Squeezy should never “unlock” anything directly. It should call a backend endpoint with a signed webhook after payment succeeds. That backend verifies the signature, figures out which Supabase user the payment belongs to, and then updates the user’s row. Until that webhook fires and is verified, nothing changes.

Once you do it this way, both of your problems disappear at the same time. Users can log in from any device because their state lives in Supabase, not localStorage. And paid plans only activate when Lemon Squeezy proves the payment actually happened.

There are a couple of important details in the middle that are easy to get wrong, like how you map a Lemon Squeezy checkout to a Supabase user, and where verification should happen so you don’t accidentally open a hole. That part depends on whether you’re using Edge Functions, a separate backend, or something else.
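As a rough sketch of the verification step: Lemon Squeezy signs webhook payloads with an HMAC of the raw request body using your signing secret (check the current docs for the exact header name and digest format; `X-Signature` and hex SHA-256 are assumptions here). The helper name `verifyWebhookSignature` is illustrative, not a real API:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical helper: verify a Lemon Squeezy webhook signature.
// Assumes the signature header carries a hex-encoded HMAC-SHA256 of
// the raw body, computed with your webhook signing secret.
export function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string,
  signingSecret: string,
): boolean {
  const expected = createHmac("sha256", signingSecret)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(expected, "utf8");
  const b = Buffer.from(signatureHeader, "utf8");
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Only after this returns true should the backend look up which Supabase user the checkout belongs to (for example via custom data you attached at checkout time) and update that user’s row with the service role key, never the anon key.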
This usually isn’t that Edge Functions are “unreliable”; it’s that push delivery is a bad fit for how edge runtimes behave under load. Edge Functions are short-lived, CPU- and time-constrained, and aggressively shut down when they hit limits or when the runtime decides the work is too long or blocking. Push notifications look simple, but in practice they involve network calls to Apple or Google, retries, token fan-out, and sometimes waiting on responses. That combination is exactly what triggers the silent “shutdown” behavior you’re seeing. The reason it works briefly is that low-volume sends stay under the execution and memory thresholds. Once volume or latency spikes, the runtime kills the process before completion and you get no meaningful error back.

The stable pattern is to let the Edge Function do only orchestration. It should validate the request, write an event to the database or a queue, and return immediately. Actual push delivery should run in a long-lived worker, a background job, or a service that is designed for retries and sustained connections. Expo and OneSignal feel heavy, but they solve this exact class of problem. If you want to keep Supabase in the loop, one approach is Edge Function to enqueue, then a separate worker or scheduled job to send pushes. That gives you observability and retries, and removes the edge runtime from the critical path.

So no, you’re not crazy, and no, this isn’t well documented. You’ve basically hit the boundary between request-oriented compute and background delivery, and push notifications live firmly on the background side.
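The enqueue-then-drain split can be sketched like this. Everything here is illustrative (the `PushEvent` shape, function names, and the in-memory array standing in for a queue table); in a real setup the Edge Function would INSERT into a Postgres jobs table via supabase-js, and the worker would be a separate long-lived process talking to APNs/FCM (or Expo/OneSignal):

```typescript
// Illustrative event shape; a real table would also carry status,
// attempt counts, and timestamps for retry bookkeeping.
interface PushEvent {
  deviceTokens: string[];
  title: string;
  body: string;
}

// Stand-in for the queue table the Edge Function writes to.
const queue: PushEvent[] = [];

// The Edge Function's whole job: validate, enqueue, return fast.
// No network calls to push providers happen here.
function handlePushRequest(event: PushEvent): { queued: boolean } {
  if (event.deviceTokens.length === 0 || !event.title) {
    return { queued: false }; // reject bad input immediately
  }
  queue.push(event); // real version: INSERT into a jobs table
  return { queued: true };
}

// The long-lived worker drains the queue later, outside the edge
// runtime's time limits. `send` abstracts the provider call; a
// failed send leaves the event queued for the next worker run.
function drainQueue(send: (e: PushEvent) => boolean): number {
  let delivered = 0;
  while (queue.length > 0) {
    if (!send(queue[0])) break; // retry on the next scheduled run
    queue.shift();
    delivered++;
  }
  return delivered;
}
```

The design point is that the request path and the delivery path never share a process, so a slow or flaky push provider can no longer get the Edge Function killed mid-send.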
The key mental model is that Git branches and Supabase environments are two separate levers. Branching code is easy. Keeping data, secrets, edge functions, and migrations truly isolated is the part that saves you from accidental prod damage. If you get the separation and promotion flow right early, merges become boring and confidence goes up fast.
Yeah, as long as they use Nuvix.
In the workflow I described, branching the frontend in GitHub doesn’t magically create a new Supabase environment. Both main and the deploy branch will use whatever connection string you give them via env vars. If you only have one Supabase project, dev and prod are literally the same database.

If you want a true dev/prod split for Supabase you need:

• two Supabase projects (dev + prod)
• different connection strings in your env vars for each environment

My post was just about moving the frontend build to GitHub + Cloudflare. Supabase dev/prod needs its own setup on top of that.
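A minimal sketch of that split, assuming the frontend reads its connection from environment variables (the variable names `SUPABASE_URL` and `SUPABASE_ANON_KEY` are the common convention, but treat them as assumptions for your setup):

```typescript
interface SupabaseConfig {
  url: string;
  anonKey: string;
}

// Resolve the Supabase connection for the current deploy target.
// Each environment (e.g. the Cloudflare build for main vs the
// deploy branch) sets its own values, pointing at a different
// Supabase project. The code never hardcodes either project.
function resolveSupabaseConfig(
  env: Record<string, string | undefined>,
): SupabaseConfig {
  const url = env.SUPABASE_URL;
  const anonKey = env.SUPABASE_ANON_KEY;
  if (!url || !anonKey) {
    throw new Error("SUPABASE_URL and SUPABASE_ANON_KEY must be set per environment");
  }
  return { url, anonKey };
}
```

With two projects, the dev deploy’s env vars point at the dev project and prod’s point at the prod project; failing loudly on missing vars prevents a misconfigured branch from silently falling back to the wrong database.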