# S3-Compatible Storage
Kitbase needs file storage for OTA update files (the app builds you push to your users). By default, self-hosted Kitbase stores these files locally on disk. For production deployments, you can use any S3-compatible storage provider, not just AWS S3.
## Supported Providers
Kitbase uses the S3 API under the hood, so any provider that implements it works out of the box:
| Provider | `S3_ENDPOINT` | Notes |
|---|---|---|
| AWS S3 | (leave empty) | Default; no endpoint needed |
| Cloudflare R2 | `https://<account_id>.r2.cloudflarestorage.com` | No egress fees |
| MinIO | `http://minio:9000` | Self-hosted object storage |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` | — |
| Backblaze B2 | `https://s3.<region>.backblazeb2.com` | — |
| Google Cloud Storage | `https://storage.googleapis.com` | Via the S3-compatible XML API |
## Configuration
Add these variables to your `.env` file:

```bash
# Required — enables S3 storage (disables local storage)
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_BUCKET_NAME=my-bucket
S3_REGION=us-east-1

# Required for non-AWS providers — the S3-compatible API endpoint
S3_ENDPOINT=https://abc123.r2.cloudflarestorage.com

# Optional — custom public URL for serving files
# Use this if your files are served from a different domain (e.g., a CDN or custom domain)
S3_PUBLIC_URL=https://files.example.com
```

Then restart:

```bash
docker compose up -d
```

> **TIP:** Setting `S3_ACCESS_KEY` is what switches Kitbase from local storage to S3. If it's empty, files are stored locally.
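The backend choice reduces to a single check on that variable. A minimal sketch of the selection logic (illustrative only; `storage_backend` is a hypothetical helper, not Kitbase's actual code):

```python
def storage_backend(env: dict) -> str:
    """Pick a storage backend the way the docs describe.

    Hypothetical helper for illustration -- the real Kitbase
    implementation may differ. `env` stands in for os.environ.
    """
    # A non-empty S3_ACCESS_KEY switches from local disk to S3
    if env.get("S3_ACCESS_KEY"):
        return "s3"
    return "local"


# Local storage when the key is absent or empty
print(storage_backend({}))                            # local
print(storage_backend({"S3_ACCESS_KEY": ""}))         # local
# S3 once a key is set
print(storage_backend({"S3_ACCESS_KEY": "AKIA..."}))  # s3
```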
## Provider Setup Guides

### Cloudflare R2
- In the Cloudflare dashboard, go to R2 Object Storage and create a bucket.
- Go to R2 → Manage R2 API Tokens → Create API token.
- Copy the Access Key ID, Secret Access Key, and your Account ID (shown in the R2 dashboard URL).
```bash
S3_ACCESS_KEY=your-r2-access-key
S3_SECRET_KEY=your-r2-secret-key
S3_BUCKET_NAME=my-bucket
S3_REGION=auto
S3_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com
```

To serve files from a custom domain, enable Public Access on the bucket and set:

```bash
S3_PUBLIC_URL=https://files.example.com
```

### MinIO
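If MinIO isn't in your stack yet, one possible Compose service definition looks like this (a sketch: the credentials, ports, and volume name are placeholder assumptions; the service name `minio` matches the `S3_ENDPOINT` used below):

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    volumes:
      - minio-data:/data

volumes:
  minio-data:
```

Remember to create the bucket (e.g., via the MinIO console) before pointing Kitbase at it, and change the default credentials in production.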
If you're already running MinIO (or want to add it to your Docker Compose setup):

```bash
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET_NAME=kitbase
S3_REGION=us-east-1
S3_ENDPOINT=http://minio:9000
S3_PUBLIC_URL=http://your-server:9000/kitbase
```

### DigitalOcean Spaces
- In the DigitalOcean dashboard, create a Space.
- Go to API → Spaces Keys → Generate New Key.
```bash
S3_ACCESS_KEY=your-spaces-key
S3_SECRET_KEY=your-spaces-secret
S3_BUCKET_NAME=my-space
S3_REGION=nyc3
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_PUBLIC_URL=https://my-space.nyc3.digitaloceanspaces.com
```

### Google Cloud Storage
GCS offers an S3-compatible XML API using HMAC keys:
- In the Google Cloud Console, go to Cloud Storage → Settings → Interoperability.
- Create an HMAC key for a service account.
```bash
S3_ACCESS_KEY=GOOG1E...
S3_SECRET_KEY=your-hmac-secret
S3_BUCKET_NAME=my-gcs-bucket
S3_REGION=auto
S3_ENDPOINT=https://storage.googleapis.com
S3_PUBLIC_URL=https://storage.googleapis.com/my-gcs-bucket
```

## How It Works
When S3_ACCESS_KEY is set, Kitbase uses the AWS S3 SDK with the configured endpoint. The SDK speaks the same S3 protocol regardless of the provider — only the endpoint URL changes.
- `S3_ENDPOINT` tells the SDK where to send API requests (uploads, downloads, deletions). Leave it empty for AWS S3.
- `S3_PUBLIC_URL` controls the URL returned when referencing stored files. If not set, it defaults to the standard AWS S3 URL format (`https://<bucket>.s3.<region>.amazonaws.com/<path>`).
- Presigned URLs are generated using the configured endpoint, so they work correctly with any provider.