
Storage Optimizations

Scaling Storage


Here are some optimizations that you can consider to improve performance and reduce costs as you start scaling Storage.

Egress

If your project has high egress, these optimizations can help reduce it.

Resize images

Images typically make up most of your egress. By keeping them as small as possible, you can cut down on egress and boost your application's performance. You can take advantage of our Image Transformation service to optimize any image on the fly.
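For illustration, a transformed public object URL follows the storage/v1/render/image/public pattern, and the SDK's getPublicUrl() with a transform option builds an equivalent URL for you. The helper below is a hypothetical sketch of that URL shape (the function name and project URL are made up):

```javascript
// Hypothetical helper illustrating the transformed-image URL shape used by
// Supabase Image Transformation. In practice, prefer the SDK's
// getPublicUrl(path, { transform: { ... } }), which builds this for you.
function transformedImageUrl(projectUrl, bucket, path, { width, height }) {
  const params = new URLSearchParams()
  if (width) params.set('width', String(width))
  if (height) params.set('height', String(height))
  return `${projectUrl}/storage/v1/render/image/public/${bucket}/${path}?${params}`
}

console.log(
  transformedImageUrl('https://xyz.supabase.co', 'avatars', 'profile.jpg', {
    width: 400,
    height: 300,
  })
)
// → https://xyz.supabase.co/storage/v1/render/image/public/avatars/profile.jpg?width=400&height=300
```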

Set a high cache-control value

Using the browser cache can effectively lower your egress, since the asset remains stored in the user's browser after the initial download. Setting a high cache-control value keeps the asset in the user's browser for an extended period, decreasing the need to download it from the server repeatedly.
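When uploading through the SDK, the cacheControl upload option takes a number of seconds, which Storage serves back as a Cache-Control: max-age header. A small sketch (the helper name is made up) of picking a long-lived value:

```javascript
// Hypothetical helper: convert a browser-cache lifetime in days into the
// seconds string expected by the SDK's cacheControl upload option.
function cacheControlSeconds(days) {
  return String(days * 24 * 60 * 60)
}

// Usage (assumes an initialized supabase client and a `file` to upload):
// await supabase.storage.from('avatars').upload('avatar.png', file, {
//   cacheControl: cacheControlSeconds(365), // served as Cache-Control: max-age=31536000
// })
console.log(cacheControlSeconds(365)) // '31536000'
```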

Limit the upload size

You have the option to set a maximum upload size for your bucket. Doing this can prevent users from uploading and then downloading excessively large files. You can control the maximum file size by configuring this option at the bucket level.
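For example, assuming a bucket named uploads and direct SQL access, you could cap uploads at 50 MiB by setting the bucket's file_size_limit column, which is expressed in bytes (the dashboard and the SDK's createBucket()/updateBucket() options expose the same setting):

```sql
-- Hypothetical bucket name; file_size_limit is expressed in bytes.
update storage.buckets
set file_size_limit = 52428800  -- 50 MiB
where id = 'uploads';
```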

Optimize listing objects

Once you have a substantial number of objects, you might observe that the supabase.storage.from().list() method starts to slow down. This occurs because the endpoint is quite generic: it attempts to retrieve both folders and objects in a single query. While this approach is very useful for building features like the Storage viewer on the Supabase dashboard, it can impact performance with a large number of objects.

If your application doesn't need the folder hierarchy computed, you can drastically speed up object listing by creating a Postgres function like the following:


create or replace function list_objects(
  bucketid text,
  prefix text,
  limits int default 100,
  offsets int default 0
) returns table (
  name text,
  id uuid,
  updated_at timestamptz,
  created_at timestamptz,
  last_accessed_at timestamptz,
  metadata jsonb
) as $$
begin
  return query select
    objects.name,
    objects.id,
    objects.updated_at,
    objects.created_at,
    objects.last_accessed_at,
    objects.metadata
  from storage.objects
  where objects.name like prefix || '%'
    and objects.bucket_id = bucketid
  order by objects.name asc
  limit limits
  offset offsets;
end;
$$ language plpgsql stable;

You can then call your Postgres function as follows:

Using SQL:


select * from list_objects('bucket_id', '', 100, 0);

Using the SDK:


const { data, error } = await supabase.rpc('list_objects', {
  bucketid: 'yourbucket',
  prefix: '',
  limits: 100,
  offsets: 0,
})
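Because list_objects filters with a left-anchored LIKE and sorts by name, a composite index can keep the query fast as the table grows. A possible index (the name is made up; the text_pattern_ops operator class lets the prefix match use the index under non-C collations):

```sql
-- Supports `bucket_id = ... and name like 'prefix%'` lookups.
create index if not exists idx_objects_bucket_id_name
  on storage.objects (bucket_id, name text_pattern_ops);
```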

Optimizing RLS

When creating RLS policies against the storage tables, add indexes to the columns your policies filter on to speed up the lookup.
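For example, assuming a bucket named private and that the objects' owner column stores the uploader's auth.uid() (newer Storage versions use owner_id instead), a policy and a matching index might look like:

```sql
-- Hypothetical policy: users may read only their own objects in the
-- `private` bucket.
create policy "Users can read own objects"
on storage.objects for select
to authenticated
using (bucket_id = 'private' and owner = auth.uid());

-- Index the columns the policy filters on to speed up the lookup.
create index if not exists idx_objects_bucket_id_owner
  on storage.objects (bucket_id, owner);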