Interpreting Supabase Grafana IO charts

Last edited: 2/3/2026

See the Supabase Grafana Installation Guide for instructions on setting up the dashboard referenced below.

There are two primary values that matter for IO:

  • Disk Throughput: how much data can be moved to and from disk per second
  • IOPS (Input/Output Operations Per Second): how many read/write operations can be performed against your disk per second

Each compute instance has unique IO settings. The current baseline (sustained) and max (burst) limits are listed below.

| Compute Instance | Baseline Throughput (MB/s) | Max Throughput (MB/s) | Baseline IOPS | Max IOPS |
|---|---|---|---|---|
| Nano (free) | 5 | 261 | 250 | 11,800 |
| Micro | 11 | 261 | 500 | 11,800 |
| Small | 22 | 261 | 1,000 | 11,800 |
| Medium | 43 | 261 | 2,000 | 11,800 |
| Large | 79 | 594 | 3,600 | 20,000 |
| XL | 149 | 594 | 6,000 | 20,000 |
| 2XL | 297 | 594 | 12,000 | 20,000 |
| 4XL | 594 | 594 | 20,000 | 20,000 |
| 8XL | 1,188 | 1,188 | 40,000 | 40,000 |
| 12XL | 1,781 | 1,781 | 50,000 | 50,000 |
| 16XL | 2,375 | 2,375 | 80,000 | 80,000 |
| 24XL | 3,750 | 3,750 | 120,000 | 120,000 |
| 24XL - Optimized CPU | 3,750 | 3,750 | 120,000 | 120,000 |
| 24XL - Optimized Memory | 3,750 | 3,750 | 120,000 | 120,000 |
| 24XL - High Memory | 3,750 | 3,750 | 120,000 | 120,000 |
| 48XL | 5,000 | 5,000 | 240,000 | 240,000 |
| 48XL - Optimized CPU | 5,000 | 5,000 | 240,000 | 240,000 |
| 48XL - Optimized Memory | 5,000 | 5,000 | 240,000 | 240,000 |
| 48XL - High Memory | 5,000 | 5,000 | 240,000 | 240,000 |

Compute sizes below XL can burst above their baseline for short periods before returning to baseline performance.
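As a rough rule of thumb for relating the two limits: Postgres reads and writes data in 8 kB blocks by default, so an instance's IOPS ceiling also implies a throughput ceiling for random access. The query below is a back-of-the-envelope sketch using the Medium instance's 2,000 IOPS baseline from the table above, assuming the default 8 kB block size:

```sql
-- 2,000 random 8 kB reads per second is only about 16 MB/s, well below
-- the Medium instance's 43 MB/s throughput baseline, so random-heavy
-- workloads typically hit the IOPS limit before the throughput limit.
select 2000 * 8192 / 1e6 as approx_mb_per_second;  -- ≈ 16.4
```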

There are other metrics that indicate IO strain.

This example shows a 16XL database exhibiting severe IO strain:

[Image: Grafana overview of a 16XL database under severe IO strain]

Its Disk IOPS is constantly near peak capacity:

[Image: Disk IOPS chart near peak capacity]

Its throughput is also high:

[Image: Disk throughput chart]

As a side effect, its CPU is burdened by heavy Busy IOWait activity:

[Image: CPU usage chart dominated by Busy IOWait]
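To cross-check the IOWait chart from inside the database, one rough approach is to sample what active backends are currently waiting on; a steady stream of 'IO' wait events points in the same direction as the charts above. This uses the standard pg_stat_activity view and is only a point-in-time snapshot, so run it a few times while the load is high:

```sql
-- Frequent wait_event_type = 'IO' entries (e.g. DataFileRead) suggest
-- queries are blocked on disk rather than on CPU or locks.
select wait_event_type, wait_event, count(*) as backends
from pg_stat_activity
where state = 'active'
  and wait_event_type is not null
group by wait_event_type, wait_event
order by backends desc;
```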

Excessive IO usage is a sign that your database is doing more disk work than it is provisioned to handle. Common causes include:

  • Excessive and needless sequential scans: poorly indexed tables force queries to read from disk (guide to resolve). See the example queries after this list.
  • Too little cache: there is not enough memory, so data is read from disk instead of from the memory cache (guide to inspect)
  • Poorly optimized RLS policies: policies that rely heavily on joins are more likely to hit disk. If possible, they should be optimized (RLS best practice guide)
  • Excessive bloat: the least likely to cause major issues, but bloat takes up space and prevents related data from being stored close together on disk, which can force the database to scan more pages than necessary (explainer guide)
  • Uploading large amounts of data: temporarily increase your compute add-on size for the duration of the uploads
  • Insufficient memory: an inadequate amount of memory forces queries to hit disk instead of the memory cache. Addressing memory issues (guide) can reduce disk strain.
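The queries below are a rough sketch for investigating the first two causes; they use standard Postgres statistics views, and the thresholds in the comments are rules of thumb rather than hard limits:

```sql
-- Tables that are sequentially scanned far more often than they are
-- index scanned are candidates for missing or unused indexes.
select relname, seq_scan, seq_tup_read, idx_scan
from pg_stat_user_tables
order by seq_tup_read desc
limit 10;

-- Cache hit ratio for table data. Values well below ~99% suggest the
-- working set no longer fits in memory and reads are falling to disk.
select
  sum(heap_blks_hit)::float
    / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) as cache_hit_ratio
from pg_statio_user_tables;
```

For RLS, one pattern from Supabase's best-practice guidance is to wrap per-row function calls such as auth.uid() in a scalar subquery so the planner evaluates them once per query rather than once per row. The policy below is a hypothetical example on a profiles table with a user_id column:

```sql
-- Hypothetical table, column, and policy names for illustration only.
create policy "Users can read their own profile"
on profiles for select
using ((select auth.uid()) = user_id);
```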

If a database exhibits these symptoms for prolonged periods, there are a few primary approaches:

  • Scale the database to get more IO, if possible
  • Optimize queries/tables, or refactor the database/app, to reduce IO
  • Spin up a read replica
  • Modify the IO configuration in the Compute and Disk settings
  • Partitions: generally used on very large tables to minimize the data pulled from disk (see the sketch after this list)
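As a sketch of the partitioning approach, the example below range-partitions a hypothetical events table by month so that queries filtering on created_at only read the relevant partitions; all table and column names are illustrative:

```sql
-- Parent table, partitioned by range on created_at.
create table events (
  id bigint generated always as identity,
  created_at timestamptz not null,
  payload jsonb
) partition by range (created_at);

-- One partition per month. Queries that filter on created_at only
-- touch the partitions overlapping the requested range, which reduces
-- the pages pulled from disk.
create table events_2025_01 partition of events
  for values from ('2025-01-01') to ('2025-02-01');
create table events_2025_02 partition of events
  for values from ('2025-02-01') to ('2025-03-01');
```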

Other useful Supabase Grafana guides:

Esoteric factors

Webhooks: Supabase webhooks use the pg_net extension to handle requests. The net.http_request_queue table isn't indexed to keep write costs low. However, if you upload millions of rows to a webhook-enabled table too quickly, it can significantly increase the read costs for the extension.

To check if reads are becoming expensive, run:

```sql
select count(*) as exact_count from net.http_request_queue;
-- the count should be relatively low (< 20,000)
```

If you encounter this issue, you can either:

  1. Increase your compute size to help handle the large volume of requests.

  2. Truncate the table to clear the queue:

```sql
TRUNCATE net.http_request_queue;
```