Makes sense to build with Supabase if you’re using all the other features there. Latency will be lower. You could add Clerk support later if you want to test speed and do a comparison, then switch with flags (in code) to use one or the other depending on your results.
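As a rough sketch of what flag-based switching could look like (all names here — `AUTH_PROVIDER`, `pickProvider`, `signIn` — are illustrative, not real Supabase or Clerk APIs):

```typescript
// Hedged sketch: route sign-in through one call site and let a flag pick
// the backend, so a latency comparison is just flipping an env var.
type AuthProvider = "supabase" | "clerk";

// Pure selector so the routing logic is testable on its own.
function pickProvider(flag: string | undefined): AuthProvider {
  return flag === "clerk" ? "clerk" : "supabase";
}

async function signIn(email: string): Promise<string> {
  const provider = pickProvider(process.env.AUTH_PROVIDER);
  // In a real app these branches would call supabase.auth / Clerk respectively;
  // the string return here is just a stand-in for the session result.
  return provider === "clerk" ? `clerk:${email}` : `supabase:${email}`;
}
```

Keeping the branch in one place means the rest of the app never knows which provider is live.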
Most cloud providers let you define secrets that can be reused across all deployments. That’s what we use. Secrets work the same as .env: name/value pairs.
You should not be using the old anon and service_role keys. Switch to the new JWT signing keys. On your question: whatever you’re using to deploy, you should be using secrets or runtime env vars so you don’t have to hard-code everything.
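A minimal sketch of the "no hard-coded keys" pattern — the env var names below are assumptions; use whatever your host’s secret manager injects:

```typescript
// Sketch: fail fast if a required secret wasn't injected at runtime,
// instead of hard-coding keys into the bundle.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

// Illustrative usage (names are assumptions, not mandated by Supabase):
// const supabase = createClient(
//   requireEnv("SUPABASE_URL"),
//   requireEnv("SUPABASE_PUBLISHABLE_KEY"),
// );
```

Failing at startup beats discovering a missing key on the first user request.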
Check the recent-ish history for this subreddit. Tons of people have already asked this question.
Well-written article with some great points. Good work. 8 critical signals you should monitor to prevent performance collapse:

*The Efficiency Markers*

- Cache Hit Ratio
- Index Usage
- Query Performance

*Operational Bottlenecks*

- Long-running Queries
- Lock Contention
- Connection Exhaustion

*Maintenance & Health*

- Table Bloat (Autovacuum)
- Transaction ID Wraparound
Any ISP in the world can block a domain or IP address for any reason. Loads of services get blocked and unblocked daily.
Free = trial, and it’s not production-worthy. As a paid customer for the last 2.5 years: it’s a solid product if you’ve read the documentation and know what you’re doing.
Why are we reposting the same story again and again? Check the thread history before posting, please. The India ISP block issue (and ISP blocks generally) is solved. Check previous threads. https://www.reddit.com/r/Supabase/s/UzK0axMa2S
This is the same fix as when the UAE had issues with a local ISP (a few months back) affecting Supabase connections, so you could make this a more generic explanation.
The Supabase support team are awesome. I didn’t know about the branch limit so this is good to know. The pricing should have been clearer. 👍🏽
User sessions should be capped depending on security requirements. I get my users to re-login after 7 days. An outage like this should not impact your user stats. Idle users should be logged out to save resources, and logging back in should be simple (a re-login popup, etc.).
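The 7-day cap above can be sketched as a simple age check — the function and field names are illustrative, and the cutoff is whatever your security model needs:

```typescript
// Sketch of a hard session cap. Sessions older than maxAgeDays force a re-login.
const DAY_MS = 24 * 60 * 60 * 1000;

function sessionExpired(issuedAtMs: number, nowMs: number, maxAgeDays = 7): boolean {
  return nowMs - issuedAtMs > maxAgeDays * DAY_MS;
}

// On app load / window focus (illustrative):
// if (sessionExpired(session.issuedAt, Date.now())) showReloginPopup();
```

Running the check on load and on focus means idle tabs get cleaned up instead of counting as active users.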
Do you not have a dev environment to test the upgrade and see how it goes before running on prod?
Best solution is to either self host or fund the projects adequately. Nothing is totally free.
So this is where you should be crystal clear on “current state” vs “desired state”. If a migration file adjusts the database to meet the desired state, then all delete and drop commands are valid. If there’s an issue, then there’s a problem with how the schema is being manipulated or changed in the “desired state”.
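A toy sketch of that current-vs-desired reasoning (names are illustrative): the diff between the two states is exactly what tells a migration which ADD and DROP statements are legitimate.

```typescript
// Sketch: diff two column lists. Anything in "desired" but not "current"
// needs an ADD; anything in "current" but not "desired" is a valid DROP.
function diffColumns(current: string[], desired: string[]) {
  return {
    toAdd: desired.filter((c) => !current.includes(c)),
    toDrop: current.filter((c) => !desired.includes(c)),
  };
}
```

If a DROP shows up that you didn’t expect, the problem is in how the desired state was edited, not in the migration mechanism.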
I just use Google for auth, I don’t use manual usernames and passwords. Too much headache.
Reiterating the point already made. It’s like saying “I’ve made an amazing car, can someone help build the engine, gears and steering, keys and security to get into the car, internal electrics and dashboard?” You’ve built the shell of a car, the easy part. If the idea is good you may want to get help from a freelance dev to properly architect this thing. Assuming you’ve vibe coded this, you’ll need your code documented so it can be understood and built out further. Do you have any idea of the quality of your generated code? FWIW, moving from Google AI Studio to an actual product that can scale is also no easy feat. MVP to actual product takes planning, testing and more testing. Lots of others have already mentioned security and RLS, so tread super carefully before you release anything.
You’ll need to plan the migration. Mine took a day with testing. I used the documentation and got Claude to help with the error handling. Here is the guide I used: https://supabase.com/docs/guides/auth/signing-keys
You can keep a folder with a recent pg_dump of your db broken down by tables, views, policies, grants, functions etc. I make sure my claude.md file uses that schema before writing any SQL. It’s not 100% but it’s much better than rogue SQL with fabricated fields. I always review all SQL output and make corrections if need be. I don’t give write access to my db (MCP) and use migration files for syncing dev to staging and prod. You can also generate TypeScript types, which might be helpful: Supabase APIs are generated from your database, which means that we can use database introspection to generate type-safe API definitions. https://supabase.com/docs/guides/api/rest/generating-types
Are you migrating or rolling fresh?
Yeah use the new JWT signing keys…
1. Use the Performance and Security Advisors. Super handy.
2. Posted this a few days ago: use JWT signing keys and verify tokens locally to save on auth server load. Game changer for speed if you are constantly polling the auth server. https://www.reddit.com/r/Supabase/s/ek9IkgIpVB
3. Everybody talks about RLS and maybe uses it, but design your security before you implement it. Use a spreadsheet with tables and fields and determine the CRUD for each field for admin, authenticated and anonymous users. Doing this as you code is bad practice. Your schema will change, but your core design should be in place.
4. Have a caching strategy for your db. You can save on db load for common queries. My current app caches 50-60% on the client side. Better for site speed and db load.
5. Keep your Postgres version up to date.
6. Don’t rely on any extensions that could get deprecated. I was using timescaledb and had to migrate away from it when Supabase stopped supporting it.
7. Test, test, test, test. Have a process to test query performance and fine-tune your approach. Sometimes a function is better than a complex query. Sometimes a server-side query or function using the service_role is better than trying to give permissions to an auth’d user.
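For the caching point, a minimal client-side sketch (the class and names are illustrative; in practice you might reach for a library like TanStack Query instead):

```typescript
// Minimal TTL cache sketch for common query results: serve from memory
// while fresh, fall through to the database once the entry expires.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string, now = Date.now()): T | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= now) {
      this.store.delete(key); // drop stale entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Usage pattern: check the cache before querying, populate on a miss.
// const cached = cache.get("top_products") ?? await fetchAndCache();
```

Even a cache this dumb takes a meaningful chunk of repeated reads off the database.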
I have a folder with a recent pg_dump of my db broken down by tables, views, policies, grants, functions etc. I make sure my claude.md file uses that schema before writing any SQL. It’s not 100% but it’s much better than rogue SQL with fabricated fields. I always review all SQL output and make corrections if need be. I don’t give write access to my db (MCP) and use migration files for syncing dev to staging and prod.
Is this not just a setting in client-side auth to use a stronger flow (PKCE)? i.e.

```javascript
auth: {
  detectSessionInUrl: true,
  flowType: 'pkce',
  storage: {
    getItem: () => Promise.resolve('FETCHED_TOKEN'),
    setItem: () => {},
    removeItem: () => {},
  },
},
```
Whatever design you decide you will have to adjust it. 95% of fields won’t change but it’s the 5% that will either change type or need to be added. Be ready for that change. Have a process in place to handle those tweaks (branching, migration files, versioning etc.). Go look at developer.doordash.com, they share a lot about their architecture and design. Most of these delivery apps also have apis so you can see the fields they use and clone.
Agreed here. So many ways to work around this given your budget. People have already said: self-host, or use other vanilla Postgres providers. Why not just use one instance for all apps? You can split your db into logical partitions for each app, i.e. tables prefixed with app_name_. Might get tricky, but it solves the issue. It still surprises me that people don’t factor the cost of tooling into side projects. Not everything can be free, so you have to get creative. I love that you migrated to SQLite, but that’s actually a step backwards IMHO. Postgres is much more capable: connection pools, functions/triggers, scale.
Google’s search index back in the day was all about columnar storage for faster reads of more metadata vs complex multi-table joins. These are the origins of NoSQL and still hold value. I know some large gaming companies that use Bigtable for game and user data. White-paper below. https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf
Let me ask why SQL over NoSQL?
NoSQL is normally better for heavy reads for game data, user inventory etc. Supabase is relational so as long as you’re not doing complex joins to get data you should be good. Auth is quick as well.
How much context does the Supabase MCP server use? I don’t use MCP servers unless they actually add exceptional value.
Unfortunately, these tests mean nothing unless you add in real world app use cases that have auth, RLS, routing, media handling, caching, complex joins and views. You need a sample app like elk (github.com/elk-zone/elk) to benchmark against.
You can pay to enable IPv4. This is what I use for pg_dump. I switch it on and off to use it. It’s like $4 a month. Should be in the Connect settings at the top of the page.
This is not a smart thing to do.
Change history for social posts. Nowhere near the level of data a financial app might use.
Take a look at timescaledb. It used to be a built-in module in Supabase (a PostgreSQL extension), but now you have to install it separately. I was using it for my time-series data. Give it a try.
No idea what you’re trying to actually do here with the key, what stack you’re using or the problem you’re trying to solve. Keys should be in a secret manager or .env file and you let the stack manage the exchange and processing.
Nope, I tried this. Supabase already sits behind Cloudflare, so you can’t add a WAF in front of a service that already has one. There may be a workaround using an edge function to rate limit.
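A hedged sketch of what that edge-function workaround might look like — a fixed-window counter you could run in front of sensitive endpoints. Note that in-memory state only lives per instance, so treat this as illustrative; real enforcement needs a shared store (a table, Redis, etc.):

```typescript
// Sketch: fixed-window rate limiter keyed by e.g. client IP.
// Allows `limit` requests per `windowMs`, then rejects until the window resets.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key or window elapsed: start a fresh window.
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// In an edge function you'd return a 429 when allow() is false.
```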
Can you share your MongoDB experience? Is it all it’s hyped up to be? Why the switch from NoSQL?
This is me. Once SQL schema changes stabilise this is the best isolated setup.
It’s not expensive at all and free gets you going really quickly.
1 database = 1 project (extra DBs cost more). So you’d have to share tables. Not sure anyone would agree to that.
For the love of all things sane, please learn some coding basics first. Head over to YouTube and search for a Supabase course (there’s a French dude who has a good series of explainer videos). Go over the basics of getting your environment set up and building an app.
Most people are fine with multi-tenant. That’s how cloud works. Your post is overly dramatic. We didn’t just come to Supabase for the DB. We came for auth, storage, the REST-based API, RLS, the DB management tools, the backups, the infrastructure management. The list goes on. Most hyperscalers have vanilla Postgres, which is fine, but the fact we get everything we need to build an app in one place, without having to wrestle with all the other functions, is the point.
The backups are worth it even if you do them yourself. I always consider what I’d be willing to pay if I could have recovered my data (same philosophy for personal backups as well). If you’re cool with that then wait.
True. But the number of people posting here who haven’t been able to recover paused-project data has been high. That hasn’t been my experience; I’ve been paying for 2+ years after a great trial.
It does include the data; it doesn’t include anything in Storage, which you have to handle yourself.
$25 gets you nightly backups. PITR backups are more expensive; I think around $100 a month, but I might be wrong.
What part of there are no automated backups in the free plan is hard to understand? Yes you can roll your own but that’s your choice.
I still don’t understand why people are so obsessed with not wanting to pay for a great service. Free only takes you so far. With no automated backups you run the risk of losing all data if the project pauses. Do you really want to take that risk?
Cloudflare had a small outage earlier…
Get it working and tested first before committing to a sub. Did you enable IPv4 for the project? I think it’s disabled by default. What error message do you see in the Supabase logs or from the console?