If it really is the case that supabase is banned, I suggest quickly buying the $10 a month add-on for a custom domain...
We have 10 production projects running, plus 2 free tiers for testing. No issues on any of them the whole week. All Pro plan, Micro and Small compute tiers, ranging from low traffic (a few dozen users) up to a thousand users per day. So far we have not received a single support request indicating that Auth does not work or that the DB is unreachable.
No, it's ONCE per PROJECT, rolled out gradually over the week across all existing supabase projects. Not one project being down for the entire week.
There is an open discussion for this, pretty old but still up to date. They definitely plan the feature, but I hope they add it soon. We don't want to use yet another 3rd party solution for this: https://github.com/orgs/supabase/discussions/8677
What auth flow are you using, implicit or the default PKCE? And are you using SSR or do you have an SPA? Two important questions for clarification: there are dozens of possible solutions to your problem, and this helps narrow it down.
Q1 2026, unchanged
Try binding the port to loopback in the docker config: ports: - 127.0.0.1:8000:8000
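In context, that could look like this in the self-host compose file. The service name kong is an assumption based on the default self-hosted setup, where the API gateway publishes port 8000:

```yaml
# Assumption: in the default self-hosted docker-compose the API gateway
# service (kong) publishes port 8000. Prefixing the mapping with
# 127.0.0.1 binds it to loopback only, so it is unreachable from
# outside the host.
services:
  kong:
    ports:
      - "127.0.0.1:8000:8000"
```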
Authentication I don't know, but you can just set a firewall rule on your machine and block external access to the Studio port.
No problem, otherwise the whole EU would not be using AWS... just select a datacenter in the EU, maybe additionally sign the data processing agreement with Supabase, and keep in mind that more than half of the work is on your end as the developer to make your product GDPR compliant.
What type is "supabase"? My guess is that you copied some code from a framework that uses packages, rather than vanilla JS... https://supabase.com/docs/reference/javascript/installing Read this page slowly, it's all in the docs there. Also check your supabase connection URL / domain and publishable key, and make sure they are correct.
1. Start here: https://supabase.com/features/row-level-security and yes, you should enable RLS on every table. In general it's the default practice for production software.
2. You test with the supabase CLI locally, on your machine. Things to read: https://supabase.com/docs/guides/local-development/testing/overview
3. There are so many architectures and ways... it depends what's best for your use-case. You can query admin data only through a server, for example, where your secret key bypasses RLS. You could create a new role, you could check user metadata in the RLS policy, etc.
4. Just write at least the basics; no one gets everything bulletproof on their first try. Be careful not to write redundant RLS policies, that costs a lot in performance.
5. If you enable RLS on a table and have no policies set, queries will ALWAYS fail or return no data, except with the secret key, which bypasses RLS. You need to actually write RLS policies, not just activate it.

Good that you're thinking about security and starting to look around. There is a big learning curve, so keep going :)
What is "supabase", where do you call createClient from? Maybe share that part too. The supabase key should be the "publishable" or "secret" key. There is no "anon" key anymore; the auth got an update.
There were whole blog posts, YouTube videos and migration guides in their docs for this, plus 3rd party packages were updated, etc. The old keys still work for existing projects, but they are disabled for new projects: https://supabase.com/blog/jwt-signing-keys It's been out for nearly half a year now.
Supabase GIVES you that. Just read the docs for the Supabase CLI and use the supabase db dump command; it will dump your entire schema locally into one .sql file. Read here: https://supabase.com/docs/guides/local-development/cli/getting-started Install the CLI on your computer, run supabase login, then supabase init in a folder, then supabase link to connect to the project you want to dump locally, or make a schema diff to get it as a migration file. Then migration up, easy, and you have a 1:1 copy without data locally. You can copy the data too, it's just another command.
I see, so if you mean 1k people at peak at the SAME time, it will slow down big time. Again, it depends heavily on your code and on how big your tables are / how much data is in them, but supabase does not give you many concurrent connections on the free plan. If you hit a limit, just upgrade to Pro once and pick the compute size you like there. We are on Pro and even a Small tier easily handles 300 CONCURRENT users without a noticeable bottleneck anywhere (for simple CRUD actions). I recommend a stress test: write an easy script that calls your functions, inserts into and updates your tables, opens 1k connections, etc. You can write a node.js or python test script for that locally pretty fast, then you know.
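A minimal sketch of such a stress script in Python with asyncio. `fake_query` is a hypothetical stand-in that only simulates latency; swap its body for a real HTTP or DB call against your own endpoint to get meaningful numbers:

```python
import asyncio
import random
import time

async def fake_query(i: int) -> int:
    """Stand-in for one real request (an insert, update or select
    against your API). Replace the sleep with an actual call."""
    await asyncio.sleep(random.uniform(0.001, 0.01))  # simulated latency
    return i

async def stress(n_clients: int) -> float:
    """Fire n_clients requests concurrently, return total wall time."""
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_query(i) for i in range(n_clients)))
    elapsed = time.perf_counter() - start
    assert len(results) == n_clients  # every request came back
    return elapsed

if __name__ == "__main__":
    took = asyncio.run(stress(1000))
    print(f"1000 concurrent calls finished in {took:.2f}s")
```

Against a real backend you would watch where latency starts climbing as you raise `n_clients`; that point is your practical concurrency limit.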
With the parameters anyone got from this post alone, no one could tell. It depends heavily on your code, what your app is about, how well it's written, what the DB does, security, the concurrent peak (not "per day" but how many interactions happen in a short timeframe), etc. This is not an LLM and nobody here knows your project like you do; you need to give more details for anyone to make a somewhat reliable guess. If it's just selects, logins and maybe some inserts, that's light. Of course, if you use triggers or your RLS is not set up correctly and you have loops running forever, you will hit a limit fast. Concurrent connections are limited too anyway.
I see. Normally you should not use production data for local testing; you seed it with fake data instead. But regarding dumping the data, supabase has a CLI command with a --data-only flag, have you tried that? This thread has an answer: https://www.reddit.com/r/Supabase/s/Hu6Cvp5kid For local seeding (maybe adapt your workflow), this helps a lot: https://supabase.com/blog/snaplet-is-now-open-source
Doesn't PITR generate physical backups, which are designed for granular restoration to a specific point in time within the Supabase platform itself, not for direct user download as a .gz file? Maybe this helps you understand it better: https://supabase.com/docs/guides/platform/backups
I don't really hear the "why" for moving away. Are docker containers that resource intensive for you? Normally I would follow the rule "never touch a running system". What's the benefit of moving, rewriting docs from the ground up, migrating, etc.? I don't know how big your DB or your internal user base is, but a time commitment like that should return something, more than a few GB of saved RAM for example. And who knows what features (auth, storage, etc.) you may want in the future. What is a very small overhead now can save you big time later.
Do you have some triggers on your auth.users table? That's often the cause of this. Check the supabase logs to see which postgres query failed.
Supabase self-host... but then you need to do everything yourself to stay compliant; maybe talk with an auditor about that. https://supabase.com/docs/guides/security/hipaa-compliance
What does the "network" tab show, in the chrome dev console for example? You should see a status code there; if not, something is maybe wrong with your createClient.
Console log the error prop; that's the error you need.
Probably not typescript related. Please send the full error message from the console.
Console log the error, not data, so you know what the error really is. On error, data will always be null...
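To illustrate why logging data alone tells you nothing, here is a hypothetical stand-in (sketched in Python, not the real supabase-js client) mimicking the { data, error } result shape:

```python
# Hypothetical stand-in for the { data, error } result object that
# supabase-js returns. This is NOT the real client, only the shape:
# exactly one of the two fields is populated.
from typing import Any

def fake_query(should_fail: bool) -> dict[str, Any]:
    if should_fail:
        return {"data": None, "error": {"message": "permission denied"}}
    return {"data": [{"id": 1}], "error": None}

res = fake_query(True)
if res["error"] is not None:
    # Log the error itself; on error, data is always null/None,
    # so logging data would just print nothing useful.
    print("query failed:", res["error"]["message"])
else:
    print("rows:", res["data"])
```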