Self-hosting is the migration target when you cannot use a managed platform. That is a smaller set of teams than the internet implies, and the trade-offs are real. Read this playbook only if you have already concluded that Supabase, Vercel, Firebase, or another managed offering does not work for your case.
If you are leaving base44 because of vendor lock-in or production reliability problems, but you do not have hard compliance constraints, the Next.js + Supabase path is faster and cheaper. Self-hosting is the right answer when something else forces it.
Why migrate to self-hosted
Four reasons that make self-hosting worth the operational cost:
- Compliance. HIPAA, PCI DSS Level 1, SOC 2 with strict data-residency requirements, FedRAMP, GDPR with EU-only storage. Base44 cannot help you here. Most managed platforms can, but at premium tiers. Self-hosted gives you the full audit trail.
- Cost predictability at scale. Past roughly 50,000 monthly active users or 100GB of data, managed platforms get expensive fast. A bare-metal Hetzner box at €40/mo handles workloads that cost $1,000+/mo on managed equivalents.
- Existing infrastructure. If your company already runs Kubernetes, Postgres, Vault, and Datadog, self-hosting is just "another service." The marginal cost is small.
- Full control. You can install any extension, run any cron, attach any sidecar, and audit any line of code. No vendor SLA. No surprise pricing changes. No platform-wide outages outside your control.
The cost: you own database operations, backups, security patches, monitoring, on-call, and capacity planning. If your team does not have those skills, this is not your migration.
What you keep, what you rebuild
| Layer | What you keep | What you rebuild |
|---|---|---|
| React components | 80–95% (JSX, Tailwind, hooks) | Anything calling @base44/sdk |
| Routing | URL structure | Re-implement on Next.js, Remix, or your framework |
| Schema | Field names + types as reference | Recreate as SQL DDL on your Postgres |
| Database rows | Data (export + load) | None |
| Authentication | User identifiers | Sessions (Keycloak / Authelia / custom) |
| RLS | Logic | Rewrite as Postgres policies or app-layer checks |
| Backend functions | Function bodies | Deploy as Node, Deno, or Go services |
| File uploads | Files | Re-upload to MinIO / S3 / your object store |
| Webhooks | Endpoints | New URLs, update senders |
| Scheduled jobs | None | Build with cron, systemd timers, or pg_cron |
| Monitoring | None | Stand up Prometheus + Grafana |
| Backups | None | pg_dump + WAL archiving |
| SSL | None | Caddy or Let's Encrypt |
| DNS + CDN | Domain | Reconfigure; add Cloudflare or Bunny |
You take on more than any other migration target. The trade is full control.
Architecture: source vs target
Base44 (current):

    [browser] → CSR React (base44 hosted)
        ↓
    @base44/sdk
        ↓
    base44 platform (opaque managed runtime)

Self-hosted (target):

    [browser] → Caddy (TLS termination + static)
        ↓
    Next.js / API server (Docker container)
        ↓          ↓          ↓
    Postgres     MinIO     Keycloak
    (Docker)    (Docker)   (Docker)
        ↓
    pg_dump + WAL → S3 (off-site backups)
        ↓
    Prometheus + Grafana (monitoring)
        ↓
    you operate all of it
For small apps, this fits on one $40 VPS. For serious workloads, split across machines, add a load balancer, replicate the database.
Step-by-step migration plan
Phase 1 — Discovery + infra design (Week 1–2)
1. Pick your hosting target
| Option | Best for | Trade-off |
|---|---|---|
| Single VPS (Hetzner / DO / Linode) | Small apps, simple stacks | Single point of failure |
| Multiple VPS + load balancer | Medium apps, basic HA | Manual capacity planning |
| Kubernetes (EKS / GKE / AKS) | Teams already on k8s | Operational complexity |
| Bare metal | Cost-sensitive at scale | Hardware ops |
| On-prem | Compliance forced | Everything ops |
Default to Docker Compose on a single Hetzner CCX-series box ($20–$80/mo) for migrations. Add complexity only when you outgrow it.
2. Inventory base44 surface area
Same drill as every migration:
grep -rn "base44\." src/ | tee migration/sdk-calls.txt
wc -l migration/sdk-calls.txt
Plus capture every entity, function, integration, scheduled task, and webhook. See the export code guide for the full inventory template.
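Before planning the rebuild, it helps to know which SDK namespaces dominate. This sketch runs against a scratch fixture so it is self-contained; in your repo, replace the here-doc with a grep over src/ as shown in the comment:

```shell
# Demo: rank base44 SDK namespaces by call count.
# In a real repo, skip the fixture and run: grep -rhoE 'base44\.[a-z]+' src/
tmp=$(mktemp -d)
cat > "$tmp/example.tsx" <<'EOF'
base44.entities.Project.list();
base44.entities.Project.create({});
base44.functions.createInvoice({});
EOF
grep -rhoE 'base44\.[a-z]+' "$tmp" | sort | uniq -c | sort -rn
# → 2 base44.entities
#   1 base44.functions
```

The counts tell you where the porting hours will go: entity CRUD is mechanical, while each function or integration call needs individual attention.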
3. Design your stack
Concrete starter stack we use most often:
- App server: Node 20 + Next.js (or Deno + Fresh, or Go + chi)
- Database: Postgres 16
- Auth: Keycloak or Authelia (OIDC) or a Node library like Lucia
- Object storage: MinIO (S3-compatible) or AWS S3 directly
- Reverse proxy + TLS: Caddy 2 (auto-renews Let's Encrypt)
- Monitoring: Prometheus + Grafana + Loki for logs
- Backups: pgBackRest with off-site retention to Backblaze B2 or S3
- Container runtime: Docker + Docker Compose (k8s for serious scale)
Phase 2 — Local stack (Week 2–3)
4. Stand up Docker Compose
# docker-compose.yml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "app"]
      interval: 10s
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - minio_data:/data
  app:
    build: ./app
    environment:
      DATABASE_URL: postgres://app:${POSTGRES_PASSWORD}@postgres:5432/app
      S3_ENDPOINT: http://minio:9000
      S3_ACCESS_KEY: ${MINIO_ROOT_USER}
      S3_SECRET_KEY: ${MINIO_ROOT_PASSWORD}
    depends_on:
      postgres:
        condition: service_healthy
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  postgres_data:
  minio_data:
  caddy_data:

# Caddyfile
yourapp.com {
    reverse_proxy app:3000
}
docker compose up and you have a working dev stack. Iterate locally, then deploy the same compose file to production.
5. Recreate the schema
Write your DDL as SQL files under migrations/. Apply with a tool like dbmate, goose, or flyway.
-- migrations/001_init.sql
CREATE TABLE projects (
  id         uuid        PRIMARY KEY DEFAULT gen_random_uuid(),
  owner_id   uuid        NOT NULL,
  name       text        NOT NULL,
  status     text        NOT NULL DEFAULT 'draft',
  created_at timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX projects_owner_id_idx ON projects (owner_id);
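If you choose to push base44's access rules into Postgres itself rather than the app layer (the RLS row in the keep/rebuild table), the shape is roughly this. The table name and the app.user_id session setting are illustrative, not a fixed convention:

```sql
-- Illustrative: enforce per-owner visibility in the database itself.
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects FORCE ROW LEVEL SECURITY;

-- The app sets this per transaction after authenticating the request, e.g.
--   SET LOCAL app.user_id = '<authenticated user uuid>';
CREATE POLICY projects_owner ON projects
  USING (owner_id = current_setting('app.user_id')::uuid);
```

App-layer checks are simpler to debug; database-level policies survive a buggy API route. Pick one and apply it consistently.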
6. Stand up auth
Keycloak is the standard for self-hosted OIDC. Add it to your compose file:
keycloak:
  image: quay.io/keycloak/keycloak:24
  command: start-dev
  environment:
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
    KC_DB_USERNAME: keycloak
    KC_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}
Note that the KC_DB settings assume a dedicated keycloak database and user in Postgres; create both before first start. Then configure a realm, a client for your app, and OIDC login flows. Your Next.js app uses next-auth with the Keycloak provider, or any OIDC library in any language.
For lighter setups, Lucia or Auth.js with email magic links works fine and skips the Keycloak operational overhead.
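As one concrete wiring sketch (next-auth v4 with its Keycloak provider; the realm name and issuer URL are placeholders for your own values):

```typescript
// Sketch: next-auth configured against the Keycloak realm above.
import NextAuth from "next-auth";
import KeycloakProvider from "next-auth/providers/keycloak";

export default NextAuth({
  providers: [
    KeycloakProvider({
      clientId: process.env.KEYCLOAK_CLIENT_ID!,
      clientSecret: process.env.KEYCLOAK_CLIENT_SECRET!,
      // The realm's OIDC issuer URL — placeholder, use your own domain/realm.
      issuer: "https://auth.yourapp.com/realms/yourapp",
    }),
  ],
});
```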
Phase 3 — Backend rebuild (Weeks 3–6)
7. Port backend functions
Each base44 backend function becomes an API route in your chosen framework. Same pattern as the Vercel migration — replace base44.entities.X calls with raw Postgres queries via pg, postgres.js, Drizzle, or Prisma.
// app/api/create-invoice/route.ts
import { NextResponse } from "next/server";
import { db } from "@/lib/db";
import Stripe from "stripe";

export async function POST(req: Request) {
  const { project_id, amount } = await req.json();
  const stripe = new Stripe(process.env.STRIPE_SECRET!);
  const invoice = await stripe.invoices.create({ customer: "...", auto_advance: true });
  await db.query(
    `INSERT INTO invoices (project_id, stripe_id, amount) VALUES ($1, $2, $3)`,
    [project_id, invoice.id, amount]
  );
  return NextResponse.json({ id: invoice.id });
}
8. Replace SDK calls in components
Same mechanical pass as every migration. Grep, replace, test, repeat.
Phase 4 — Data backfill (Week 6–7)
9. Export from base44, import to your Postgres
// scripts/import-projects.ts
import { Pool } from "pg";
import projects from "./export/projects.json";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Split an array into fixed-size batches for multi-row inserts.
function chunk<T>(arr: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

async function main() {
  for (const batch of chunk(projects, 500)) {
    const values = batch.flatMap((p) => [p.id, p.user_id, p.name, p.status, p.createdAt]);
    const placeholders = batch
      .map((_, i) => `($${i * 5 + 1}, $${i * 5 + 2}, $${i * 5 + 3}, $${i * 5 + 4}, $${i * 5 + 5})`)
      .join(",");
    await pool.query(
      `INSERT INTO projects (id, owner_id, name, status, created_at) VALUES ${placeholders}`,
      values
    );
  }
  await pool.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
Validate row counts. Spot-check ten random rows per entity. Re-run if anything diverges.
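The row-count validation can be as simple as diffing two count maps. A minimal sketch, assuming you have already queried per-entity counts from the base44 export and from your Postgres:

```typescript
// Sketch: compare per-entity row counts from the export vs. the new database.
// Returns the entities whose counts diverge (empty array = parity).
function countDiff(
  source: Record<string, number>,
  target: Record<string, number>,
): string[] {
  const entities = new Set(Object.keys(source).concat(Object.keys(target)));
  return Array.from(entities).filter((e) => (source[e] ?? 0) !== (target[e] ?? 0));
}

// Example with made-up counts:
const fromExport = { projects: 1204, invoices: 88 };
const fromPostgres = { projects: 1204, invoices: 87 };
console.log(countDiff(fromExport, fromPostgres)); // → [ "invoices" ]
```

Treating an entity missing on either side as count 0 catches tables you forgot to import entirely, not just partial loads.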
Phase 5 — Production deploy (Week 7–9)
10. Provision your server
# On the target VPS
apt update && apt upgrade -y
apt install -y docker.io docker-compose-plugin ufw fail2ban
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp && ufw enable
Clone your repo, copy .env, run docker compose up -d. Caddy fetches an SSL cert. Your app is live on the public domain.
11. Set up backups
# /etc/cron.d/postgres-backup
0 3 * * * docker exec postgres pg_dump -U app app | gzip > /backups/app-$(date +\%F).sql.gz
30 3 * * * find /backups -name '*.sql.gz' -mtime +7 -delete
0 4 * * * rclone copy /backups b2:backups
For point-in-time recovery, configure pgBackRest with archiving to S3. Document the restore procedure. Test it monthly.
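Testing restores is the part teams skip. Even a minimal integrity check beats nothing. This demo fabricates a dump so it runs anywhere; a real monthly test restores the latest file into a scratch database with psql, as the comment notes:

```shell
# Demo: smoke-test that a backup is intact and non-empty.
# In production, point $f at the newest file in /backups and follow with a
# real restore, e.g.:  gzip -cd "$f" | psql -h scratch-db -U app app
tmp=$(mktemp -d)
echo "CREATE TABLE t (id int);" > "$tmp/app.sql"   # stand-in for pg_dump output
gzip "$tmp/app.sql"
f="$tmp/app.sql.gz"
gzip -t "$f" && [ "$(gzip -cd "$f" | wc -c)" -gt 0 ] && echo "backup OK"
# → backup OK
```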
12. Set up monitoring
# docker-compose.monitoring.yml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
  postgres-exporter:
    image: prometheuscommunity/postgres-exporter
    environment:
      DATA_SOURCE_NAME: "postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/app?sslmode=disable"
Build dashboards for: app request rate, app error rate, p95 latency, Postgres connections, Postgres slow queries, disk usage, memory pressure. Wire alerts to Slack or PagerDuty.
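The compose file mounts a ./prometheus.yml you still have to write. A minimal sketch; job names are arbitrary, 9187 is the postgres-exporter's default port, and the app target assumes your server exposes a /metrics endpoint:

```yaml
# prometheus.yml — minimal scrape config for the stack above
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: postgres
    static_configs:
      - targets: ["postgres-exporter:9187"]   # exporter's default port
  - job_name: app
    static_configs:
      - targets: ["app:3000"]                 # assumes the app serves /metrics
```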
Phase 6 — Cutover (Week 9–10)
13. Dual-run + DNS swap
Same shape as the Vercel cutover: dual-write, validate parity, swap DNS at a low-traffic hour. Lock base44 read-only after.
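"Dual-write" here just means every mutation goes to both stacks while you validate parity. A minimal wrapper sketch, with primary and shadow as hypothetical writer functions (base44 SDK on one side, your new Postgres API on the other):

```typescript
// Sketch: write to the current source of truth, mirror to the new stack,
// and never let a shadow failure break the user-facing request.
async function dualWrite<T>(
  primary: (v: T) => Promise<void>,
  shadow: (v: T) => Promise<void>,
  value: T,
): Promise<void> {
  await primary(value); // base44 stays authoritative until the DNS swap
  try {
    await shadow(value); // best-effort mirror into self-hosted Postgres
  } catch (err) {
    // Log and move on; re-run the backfill for anything that diverged.
    console.error("shadow write failed; queue for backfill", err);
  }
}
```

The asymmetry matters: a primary failure should surface to the user, a shadow failure should only surface to you.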
Phase 7 — Steady state (Weeks 10+)
14. Operate
Patch the OS monthly. Patch Postgres minor versions monthly. Major Postgres upgrades annually. Rotate secrets quarterly. Test restores monthly. Review logs weekly. Welcome to ops.
Common pitfalls
1. Skipping backups. The single most common self-hosted disaster. Set up backups before you serve a single user.
2. Running Postgres without monitoring. When the disk fills or the connection pool exhausts, you find out from angry users. Prometheus + alerts catch it before they do.
3. SSL renewal failure. Caddy handles this for you; manually managed Certbot setups tend to break at the 90-day renewal mark when the renewal cron or timer silently fails. Use Caddy.
4. Single-host deployment without backups means no DR. If the box dies, you lose everything. Off-site backups are non-negotiable; paid backup-as-a-service is fine if you do not want to manage it yourself.
5. Auth misconfiguration. Keycloak is powerful but easy to misconfigure. Stick with sensible defaults and read the OIDC spec section on token validation. Worth two days to do right.
6. Underestimating ops time. A self-hosted stack takes one to two engineer-days per month of routine maintenance, plus surge time for incidents. Budget it.
7. Over-engineering for "scale we might hit." A single VPS handles 5,000 daily active users without breaking a sweat. Do not start on Kubernetes.
Timeline + team
Ten to sixteen weeks with this team:
- One senior backend engineer. Owns the rebuild end-to-end. Forty hours per week.
- One DevOps / SRE engineer. Owns infra, deploy, monitoring, backups. Twenty to forty hours per week.
- One frontend engineer. Owns the React port. Twenty hours per week.
- One compliance officer (if regulated). Owns audit prep. Variable.
A smaller team is feasible only if you have a true full-stack senior who has shipped self-hosted before. Otherwise, two people minimum.
Cost
Migration tiers:
| Tier | Price | What you get |
|---|---|---|
| Small | $6,000 | Single-VPS Docker Compose, simple app, basic backups |
| Medium | $12,000 | Multi-host, monitoring stack, formal DR runbook |
| Enterprise | $25,000+ | Kubernetes, HA Postgres, full compliance prep, on-call handoff |
Ongoing infra: $50–$200/mo for small apps, $500–$2,000/mo for medium scale, $2,000+ for serious workloads with HA and observability.
DIY: same engineering time as any migration, plus a permanent ops tax of one to two days per month per engineer thereafter.
DIY vs hire decision
DIY this if:
- Your team has shipped and operated self-hosted services before.
- You have an SRE or DevOps engineer on staff.
- The compliance requirements are clear and you have someone who has navigated them.
- You can spare ten to sixteen weeks of focused time across two to three engineers.
Hire help if:
- You have never operated Postgres in production.
- You have compliance requirements with no in-house expertise.
- You need to be live in under twelve weeks.
- Your team is base44-only.
We typically partner with internal engineering teams for self-hosted migrations rather than handing off entirely. The handoff itself is the highest-risk step; do it slowly and document everything.
Want a free migration assessment?
We will look at your stack, your compliance constraints, and your existing infrastructure, and tell you whether self-hosting is the right call. Free thirty-minute call.
Book a free migration assessment
Related migrations
- Base44 to Next.js + Supabase — managed alternative if compliance does not require self-hosting.
- Base44 to Firebase — Google's managed offering, similar trade-offs to Supabase.
- When to leave base44 — decision framework if you are not sure migration is the right call.