
S3-Compatible Storage

Kitbase uses file storage for OTA update files (app builds you push to your users). By default, self-hosted Kitbase stores these files locally on disk. For production deployments, you can use any S3-compatible storage provider — not just AWS S3.

Supported Providers

Kitbase uses the S3 API under the hood. Any provider that implements the S3-compatible API works out of the box:

| Provider | S3_ENDPOINT | Notes |
| --- | --- | --- |
| AWS S3 | (leave empty) | Default; no endpoint needed |
| Cloudflare R2 | `https://<account_id>.r2.cloudflarestorage.com` | No egress fees |
| MinIO | `http://minio:9000` | Self-hosted object storage |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` | |
| Backblaze B2 | `https://s3.<region>.backblazeb2.com` | |
| Google Cloud Storage | `https://storage.googleapis.com` | Via the S3-compatible XML API |

Configuration

Add these variables to your .env file:

bash
# Required — enables S3 storage (disables local storage)
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_BUCKET_NAME=my-bucket
S3_REGION=us-east-1

# Required for non-AWS providers — the S3-compatible API endpoint
S3_ENDPOINT=https://abc123.r2.cloudflarestorage.com

# Optional — custom public URL for serving files
# Use this if your files are served from a different domain (e.g., a CDN or custom domain)
S3_PUBLIC_URL=https://files.example.com

Then restart Kitbase to pick up the new configuration:

bash
docker compose up -d

TIP

Setting S3_ACCESS_KEY is what switches Kitbase from local storage to S3. If it's empty, files are stored locally.
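The switching behavior can be sketched in a few lines. This is illustrative only — `choose_storage_backend` is a hypothetical helper, not Kitbase's actual code:

```python
def choose_storage_backend(env: dict) -> str:
    """Return "s3" when an access key is configured, else "local".

    Sketch of the behavior described in the tip above: a populated
    S3_ACCESS_KEY enables S3 storage; empty or missing keeps local disk.
    """
    if env.get("S3_ACCESS_KEY"):
        return "s3"
    return "local"

# A populated S3_ACCESS_KEY switches storage to S3:
print(choose_storage_backend({"S3_ACCESS_KEY": "AKIA..."}))  # s3
# Empty or missing falls back to local disk storage:
print(choose_storage_backend({"S3_ACCESS_KEY": ""}))         # local
```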

Provider Setup Guides

Cloudflare R2

  1. In the Cloudflare dashboard, go to R2 Object Storage and create a bucket.
  2. Go to R2 → Manage R2 API Tokens → Create API token.
  3. Copy the Access Key ID, Secret Access Key, and your Account ID (shown in the R2 dashboard URL).
bash
S3_ACCESS_KEY=your-r2-access-key
S3_SECRET_KEY=your-r2-secret-key
S3_BUCKET_NAME=my-bucket
S3_REGION=auto
S3_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com

To serve files from a custom domain, enable Public Access on the bucket and set:

bash
S3_PUBLIC_URL=https://files.example.com

MinIO

If you're already running MinIO (or want to add it to your Docker Compose setup):

bash
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET_NAME=kitbase
S3_REGION=us-east-1
S3_ENDPOINT=http://minio:9000
S3_PUBLIC_URL=http://your-server:9000/kitbase
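If MinIO isn't running yet, a Compose service along these lines would work. This is a minimal sketch — the service name, ports, volume, and credentials are assumptions; change the default `minioadmin` credentials before exposing the server:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API — matches S3_ENDPOINT=http://minio:9000
      - "9001:9001"   # web console
    environment:
      MINIO_ROOT_USER: minioadmin      # change in production
      MINIO_ROOT_PASSWORD: minioadmin  # change in production
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```

Remember to create the `kitbase` bucket (via the console or `mc mb`) before pushing your first build.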

DigitalOcean Spaces

  1. In the DigitalOcean dashboard, create a Space.
  2. Go to API → Spaces Keys → Generate New Key.
bash
S3_ACCESS_KEY=your-spaces-key
S3_SECRET_KEY=your-spaces-secret
S3_BUCKET_NAME=my-space
S3_REGION=nyc3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_PUBLIC_URL=https://my-space.nyc3.digitaloceanspaces.com

Google Cloud Storage

GCS offers an S3-compatible XML API using HMAC keys:

  1. In the Google Cloud Console, go to Cloud Storage → Settings → Interoperability.
  2. Create an HMAC key for a service account.
bash
S3_ACCESS_KEY=GOOG1E...
S3_SECRET_KEY=your-hmac-secret
S3_BUCKET_NAME=my-gcs-bucket
S3_REGION=auto
S3_ENDPOINT=https://storage.googleapis.com
S3_PUBLIC_URL=https://storage.googleapis.com/my-gcs-bucket

How It Works

When S3_ACCESS_KEY is set, Kitbase uses the AWS S3 SDK with the configured endpoint. The SDK speaks the same S3 protocol regardless of the provider — only the endpoint URL changes.

  • S3_ENDPOINT — tells the SDK where to send API requests (uploads, downloads, deletions). Leave empty for AWS S3.
  • S3_PUBLIC_URL — controls the URL returned when referencing stored files. If not set, defaults to the standard AWS S3 URL format (https://<bucket>.s3.<region>.amazonaws.com/<path>).
  • Presigned URLs — generated using the configured endpoint, so they work correctly with any provider.
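The URL resolution described above can be sketched as follows. `public_file_url` is a hypothetical helper for illustration, not a Kitbase API:

```python
def public_file_url(path: str, bucket: str, region: str,
                    public_url: str = "") -> str:
    """Build the public URL for a stored file.

    Mirrors the behavior described above: S3_PUBLIC_URL wins when set;
    otherwise fall back to the standard AWS S3 URL format.
    Illustrative sketch only.
    """
    if public_url:
        return f"{public_url.rstrip('/')}/{path.lstrip('/')}"
    return f"https://{bucket}.s3.{region}.amazonaws.com/{path.lstrip('/')}"

# With a CDN / custom domain configured via S3_PUBLIC_URL:
print(public_file_url("builds/1.2.3.zip", "my-bucket", "us-east-1",
                      public_url="https://files.example.com"))
# → https://files.example.com/builds/1.2.3.zip

# Without S3_PUBLIC_URL, the AWS default format applies:
print(public_file_url("builds/1.2.3.zip", "my-bucket", "us-east-1"))
# → https://my-bucket.s3.us-east-1.amazonaws.com/builds/1.2.3.zip
```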

Released under the MIT License.