
Migrate Base44 to Self-Hosted: Docker, Postgres, and Full Stack Ownership

Self-hosting a base44 app means running your own Postgres, your own Node or Deno API, your own object storage, and your own auth on a VPS, Kubernetes cluster, or on-prem hardware. Plan ten to sixteen weeks. You gain total control, compliance readiness, and predictable cost. You take on database operations, backups, monitoring, and on-call. This is the right path for regulated industries, data-residency requirements, or teams already operating infrastructure.

Last verified
2026-05-01
Difficulty
HARD
Est. effort
~480h
Target
self-hosted (Docker + Postgres + Node)

Self-hosting is the migration target when you cannot use a managed platform. That is a smaller set of teams than the internet implies, and the trade-offs are real. Read this playbook only if you have already concluded that Supabase, Vercel, Firebase, or another managed offering does not work for your case.

If you are leaving base44 because of vendor lock-in or production reliability problems but you do not have hard compliance constraints, the Next.js + Supabase path is faster and cheaper. Self-hosting is the right answer when something else forces it.

Why migrate to self-hosted

Four reasons that make self-hosting worth the operational cost:

  1. Compliance. HIPAA, PCI Level 1, SOC 2 with strict data-residency, FedRAMP, GDPR with EU-only storage. Base44 cannot help you here. Most managed platforms can, but at premium tiers. Self-hosted gives you the full audit trail.
  2. Cost predictability at scale. Past roughly 50,000 monthly active users or 100GB of data, managed platforms get expensive fast. A bare-metal Hetzner box at €40/mo handles workloads that cost $1,000+/mo on managed equivalents.
  3. Existing infrastructure. If your company already runs Kubernetes, Postgres, Vault, and Datadog, self-hosting is just "another service." The marginal cost is small.
  4. Full control. You can install any extension, run any cron, attach any sidecar, and audit any line of code. No vendor SLA. No surprise pricing changes. No platform-wide outages outside your control.

The cost: you own database operations, backups, security patches, monitoring, on-call, and capacity planning. If your team does not have those skills, this is not your migration.

What you keep, what you rebuild

Layer | What you keep | What you rebuild
React components | 80–95% (JSX, Tailwind, hooks) | Anything calling @base44/sdk
Routing | URL structure | Re-implement on Next.js, Remix, or your framework
Schema | Field names + types as reference | Recreate as SQL DDL on your Postgres
Database rows | Data (export + load) | None
Authentication | User identifiers | Sessions (Keycloak / Authelia / custom)
RLS | Logic | Rewrite as Postgres policies or app-layer checks
Backend functions | Function bodies | Deploy as Node, Deno, or Go services
File uploads | Files | Re-upload to MinIO / S3 / your object store
Webhooks | Endpoints | New URLs, update senders
Scheduled jobs | None | Build with cron, systemd timers, or pg_cron
Monitoring | None | Stand up Prometheus + Grafana
Backups | None | pg_dump + WAL archiving
SSL | None | Caddy or Let's Encrypt
DNS + CDN | Domain | Reconfigure; add Cloudflare or Bunny

You take on more than any other migration target. The trade is full control.

Architecture: source vs target

Base44 (current):

[browser] → CSR React (base44 hosted)
              ↓
        @base44/sdk
              ↓
    base44 platform (opaque managed runtime)

Self-hosted (target):

[browser] → Caddy (TLS termination + static)
              ↓
        Next.js / API server (Docker container)
              ↓                 ↓                ↓
        Postgres          MinIO             Keycloak
       (Docker)          (Docker)          (Docker)
              ↓
       pg_dump + WAL → S3 (off-site backups)
              ↓
       Prometheus + Grafana (monitoring)
              ↓
       you operate all of it

For small apps, this fits on one $40 VPS. For serious workloads, split across machines, add a load balancer, replicate the database.

Step-by-step migration plan

Phase 1 — Discovery + infra design (Week 1–2)

1. Pick your hosting target

Option | Best for | Trade-off
Single VPS (Hetzner / DO / Linode) | Small apps, simple stacks | Single point of failure
Multiple VPS + load balancer | Medium apps, basic HA | Manual capacity planning
Kubernetes (EKS / GKE / AKS) | Teams already on k8s | Operational complexity
Bare metal | Cost-sensitive at scale | Hardware ops
On-prem | Compliance forced | Everything ops

Default to Docker Compose on a single Hetzner CCX-series box ($20–$80/mo) for migrations. Add complexity only when you outgrow it.

2. Inventory base44 surface area

Same drill as every migration:

grep -rn "base44\." src/ | tee migration/sdk-calls.txt
wc -l migration/sdk-calls.txt

Plus capture every entity, function, integration, scheduled task, and webhook. See the export code guide for the full inventory template.
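To turn the raw grep output into a prioritized worklist, a short script can tally calls per SDK namespace (entities, auth, storage, and so on). A sketch — the parsing is a rough heuristic over the grep format above, and the script filename is illustrative:

```typescript
// scripts/tally-sdk-calls.ts
// Count base44 SDK usage per namespace from the grep output above,
// so you know which surface (entities, auth, storage, ...) dominates.
import { readFileSync } from "node:fs";

export function tallySdkCalls(grepOutput: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of grepOutput.split("\n")) {
    // grep -rn lines look like: src/App.tsx:12:  base44.entities.Project.list()
    const match = line.match(/base44\.(\w+)/);
    if (match) counts[match[1]] = (counts[match[1]] ?? 0) + 1;
  }
  return counts;
}

// CLI usage: npx tsx scripts/tally-sdk-calls.ts migration/sdk-calls.txt
if (process.argv[2]) {
  console.table(tallySdkCalls(readFileSync(process.argv[2], "utf8")));
}
```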

3. Design your stack

Concrete starter stack we use most often:

  • App server: Node 20 + Next.js (or Deno + Fresh, or Go + chi)
  • Database: Postgres 16
  • Auth: Keycloak or Authelia (OIDC) or a Node library like Lucia
  • Object storage: MinIO (S3-compatible) or AWS S3 directly
  • Reverse proxy + TLS: Caddy 2 (auto-renews Let's Encrypt)
  • Monitoring: Prometheus + Grafana + Loki for logs
  • Backups: pgBackRest with off-site retention to Backblaze B2 or S3
  • Container runtime: Docker + Docker Compose (k8s for serious scale)

Phase 2 — Local stack (Week 2–3)

4. Stand up Docker Compose

# docker-compose.yml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "app"]
      interval: 10s

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    volumes:
      - minio_data:/data

  app:
    build: ./app
    environment:
      DATABASE_URL: postgres://app:${POSTGRES_PASSWORD}@postgres:5432/app
      S3_ENDPOINT: http://minio:9000
      S3_ACCESS_KEY: ${MINIO_ROOT_USER}
      S3_SECRET_KEY: ${MINIO_ROOT_PASSWORD}
    depends_on:
      postgres:
        condition: service_healthy

  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  postgres_data:
  minio_data:
  caddy_data:

# Caddyfile
yourapp.com {
  reverse_proxy app:3000
}

docker compose up and you have a working dev stack. Iterate locally, then deploy the same compose file to production.

5. Recreate the schema

Write your DDL as SQL files under migrations/. Apply with a tool like dbmate, goose, or flyway.

-- migrations/001_init.sql
CREATE TABLE projects (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  owner_id uuid NOT NULL,
  name text NOT NULL,
  status text NOT NULL DEFAULT 'draft',
  created_at timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX projects_owner_id_idx ON projects(owner_id);

6. Stand up auth

Keycloak is the standard for self-hosted OIDC. Add it to your compose file:

  keycloak:
    image: quay.io/keycloak/keycloak:24
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: ${KEYCLOAK_DB_PASSWORD}

Configure a realm, a client for your app, and OIDC login flows. Your Next.js app uses next-auth with the Keycloak provider, or any OIDC library in any language.

For lighter setups, Lucia or Auth.js with email magic links works fine and skips the Keycloak operational overhead.
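If you take the Next.js route, wiring the realm into next-auth is a few lines. A sketch assuming next-auth v4 with its Keycloak provider — the env var names and issuer URL shape are placeholders for your own configuration:

```typescript
// app/api/auth/[...nextauth]/route.ts
import NextAuth from "next-auth";
import KeycloakProvider from "next-auth/providers/keycloak";

const handler = NextAuth({
  providers: [
    KeycloakProvider({
      clientId: process.env.KEYCLOAK_CLIENT_ID!,
      clientSecret: process.env.KEYCLOAK_CLIENT_SECRET!,
      // issuer points at the realm: https://<keycloak-host>/realms/<realm-name>
      issuer: process.env.KEYCLOAK_ISSUER,
    }),
  ],
});

export { handler as GET, handler as POST };
```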

Phase 3 — Backend rebuild (Weeks 3–6)

7. Port backend functions

Each base44 backend function becomes an API route in your chosen framework. Same pattern as the Vercel migration — replace base44.entities.X calls with raw Postgres queries via pg, postgres.js, Drizzle, or Prisma.

// app/api/create-invoice/route.ts
import { NextResponse } from "next/server";
import { db } from "@/lib/db";
import Stripe from "stripe";

export async function POST(req: Request) {
  const { project_id, amount } = await req.json();
  const stripe = new Stripe(process.env.STRIPE_SECRET!);

  const invoice = await stripe.invoices.create({ customer: "...", auto_advance: true });
  await db.query(
    `INSERT INTO invoices (project_id, stripe_id, amount) VALUES ($1, $2, $3)`,
    [project_id, invoice.id, amount]
  );

  return NextResponse.json({ id: invoice.id });
}
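Base44's RLS rules do not carry over automatically. Until you write real Postgres policies, the simplest port is enforcing ownership inside every query at the app layer. A minimal sketch — the helper name, table shape, and `owner_id` column are illustrative:

```typescript
// lib/scoped.ts
// App-layer stand-in for base44 RLS: every read is scoped to the
// authenticated user by appending an owner_id predicate to the SQL.
export function ownerScopedSelect(
  table: string,
  columns: string[],
  ownerId: string
): { text: string; values: string[] } {
  // Whitelist identifiers; never interpolate user input into SQL text.
  if (!/^[a-z_]+$/.test(table) || columns.some((c) => !/^[a-z_]+$/.test(c))) {
    throw new Error("invalid identifier");
  }
  return {
    text: `SELECT ${columns.join(", ")} FROM ${table} WHERE owner_id = $1`,
    values: [ownerId],
  };
}
```

Usage with pg, which accepts a `{ text, values }` query config: `await db.query(ownerScopedSelect("projects", ["id", "name"], session.userId))`.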

8. Replace SDK calls in components

Same mechanical pass as every migration. Grep, replace, test, repeat.
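As a concrete before/after: a base44 list call with a filter becomes a fetch against your own API. A sketch — the endpoint path and filter shape are illustrative, not a base44 SDK contract:

```typescript
// lib/api.ts
// Before: const projects = await base44.entities.Project.filter({ status: "active" });
// After:  const projects = await listProjects({ status: "active" });

// Turn a flat filter object into a query string for the new API.
export function toQuery(filters: Record<string, string>): string {
  const qs = new URLSearchParams(filters).toString();
  return qs ? `?${qs}` : "";
}

export async function listProjects(filters: Record<string, string> = {}) {
  const res = await fetch(`/api/projects${toQuery(filters)}`);
  if (!res.ok) throw new Error(`listProjects failed: ${res.status}`);
  return res.json();
}
```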

Phase 4 — Data backfill (Week 6–7)

9. Export from base44, import to your Postgres

// scripts/import-projects.ts
import { Pool } from "pg";
import projects from "./export/projects.json";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Split the export into fixed-size batches for multi-row INSERTs.
function chunk<T>(arr: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

async function main() {
  for (const batch of chunk(projects, 500)) {
    const values = batch.flatMap((p) => [p.id, p.user_id, p.name, p.status, p.createdAt]);
    const placeholders = batch
      .map((_, i) => `($${i * 5 + 1}, $${i * 5 + 2}, $${i * 5 + 3}, $${i * 5 + 4}, $${i * 5 + 5})`)
      .join(",");
    await pool.query(
      `INSERT INTO projects (id, owner_id, name, status, created_at) VALUES ${placeholders}`,
      values
    );
  }
  await pool.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

Validate row counts. Spot-check ten random rows per entity. Re-run if anything diverges.
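The row-count check is worth scripting so it can be re-run after every import attempt. A sketch of the comparison step — the entity names are illustrative, the base44 side comes from your export files, and the Postgres side is one `SELECT count(*)` per table:

```typescript
// scripts/verify-counts.ts
// Compare exported row counts against what landed in Postgres.
// Returns the list of entities whose counts diverge (empty = clean import).
export function diffCounts(
  exported: Record<string, number>,
  imported: Record<string, number>
): string[] {
  return Object.keys(exported).filter((e) => exported[e] !== (imported[e] ?? 0));
}
```

Feed it `{ projects: exportFile.length }` on one side and the `count(*)` results on the other; anything the function returns gets re-imported.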

Phase 5 — Production deploy (Week 7–9)

10. Provision your server

# On the target VPS
apt update && apt upgrade -y
apt install -y docker.io docker-compose-plugin ufw fail2ban
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp && ufw enable

Clone your repo, copy .env, run docker compose up -d. Caddy fetches an SSL cert. Your app is live on the public domain.

11. Set up backups

# /etc/cron.d/postgres-backup — cron.d entries need a user field;
# adjust /opt/app to wherever your compose project lives
0 3 * * * root cd /opt/app && docker compose exec -T postgres pg_dump -U app app | gzip > /backups/app-$(date +\%F).sql.gz
30 3 * * * root find /backups -name '*.sql.gz' -mtime +7 -delete
0 4 * * * root rclone copy /backups b2:backups

For point-in-time recovery, configure pgBackRest with archiving to S3. Document the restore procedure. Test it monthly.

12. Set up monitoring

# docker-compose.monitoring.yml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
  postgres-exporter:
    image: prometheuscommunity/postgres-exporter
    environment:
      DATA_SOURCE_NAME: "postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/app?sslmode=disable"

Build dashboards for: app request rate, app error rate, p95 latency, Postgres connections, Postgres slow queries, disk usage, memory pressure. Wire alerts to Slack or PagerDuty.
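Prometheus also needs to be told where to scrape. A minimal prometheus.yml for the compose stack above — the job names are arbitrary, and the app target assumes you expose a /metrics endpoint from your framework, which is an assumption, not a default:

```yaml
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: postgres
    static_configs:
      - targets: ["postgres-exporter:9187"]   # exporter's default port
  - job_name: app
    metrics_path: /metrics                    # assumes your app exposes this
    static_configs:
      - targets: ["app:3000"]
```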

Phase 6 — Cutover (Week 9–10)

13. Dual-run + DNS swap

Same shape as the Vercel cutover: dual-write, validate parity, swap DNS at low-traffic hour. Lock base44 read-only after.
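Dual-write is the step teams most often under-specify. The shape: every mutation hits base44 first (still the source of truth), then your Postgres-backed API, and failures on the new path are logged for reconciliation rather than thrown so the legacy path keeps serving users. A sketch with illustrative writer interfaces:

```typescript
// lib/dual-write.ts
// During cutover, mirror every write into the new stack and record
// divergence without ever failing the legacy (base44) request path.
export interface Writer {
  create(record: { id: string }): Promise<void>;
}

export async function dualWrite(
  legacy: Writer,
  next: Writer,
  record: { id: string },
  onMismatch: (err: unknown) => void
): Promise<void> {
  await legacy.create(record); // source of truth until the DNS swap
  try {
    await next.create(record); // new Postgres-backed API
  } catch (err) {
    onMismatch(err); // log + reconcile later; the user request still succeeds
  }
}
```

After a few days of clean parity reports from the mismatch log, you swap DNS and invert the roles.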

Phase 7 — Steady state (Weeks 10+)

14. Operate

Patch the OS monthly. Patch Postgres minor versions monthly. Major Postgres upgrades annually. Rotate secrets quarterly. Test restores monthly. Review logs weekly. Welcome to ops.

Common pitfalls

1. Skipping backups. The single most common self-hosted disaster. Set up backups before you serve a single user.

2. Running Postgres without monitoring. When the disk fills or the connection pool exhausts, you find out from angry users. Prometheus + alerts catch it before they do.

3. SSL renewal failure. Caddy handles renewal automatically; manually managed Certbot setups tend to break when a 90-day renewal is missed. Use Caddy.

4. Single-host deployment without backups means no DR. If the box dies, you lose everything. Off-site backups are non-negotiable; paid backup-as-a-service is fine if you do not want to manage it yourself.

5. Auth misconfiguration. Keycloak is powerful but easy to misconfigure. Stick with sensible defaults and read the OIDC spec section on token validation. Worth two days to do right.

6. Underestimating ops time. A self-hosted stack takes one to two engineer-days per month of routine maintenance, plus surge time for incidents. Budget it.

7. Over-engineering for "scale we might hit." A single VPS handles 5,000 daily active users without breaking a sweat. Do not start on Kubernetes.

Timeline + team

Ten to sixteen weeks with this team:

  • One senior backend engineer. Owns the rebuild end-to-end. Forty hours per week.
  • One DevOps / SRE engineer. Owns infra, deploy, monitoring, backups. Twenty to forty hours per week.
  • One frontend engineer. Owns the React port. Twenty hours per week.
  • One compliance officer (if regulated). Owns audit prep. Variable.

Smaller team is feasible only if you have a true full-stack senior who has shipped self-hosted before. Otherwise, two people minimum.

Cost

Migration tiers:

Tier | Price | What you get
Small | $6,000 | Single-VPS Docker Compose, simple app, basic backups
Medium | $12,000 | Multi-host, monitoring stack, formal DR runbook
Enterprise | $25,000+ | Kubernetes, HA Postgres, full compliance prep, on-call handoff

Ongoing infra: $50–$200/mo for small apps, $500–$2,000/mo for medium scale, $2,000+ for serious workloads with HA and observability.

DIY: same engineering time as any migration, plus a permanent ops tax of one to two engineer-days per month thereafter.

DIY vs hire decision

DIY this if:

  • Your team has shipped and operated self-hosted services before.
  • You have an SRE or DevOps engineer on staff.
  • The compliance requirements are clear and you have someone who has navigated them.
  • You can spare ten to sixteen weeks of focused time across two to three engineers.

Hire help if:

  • You have never operated Postgres in production.
  • You have compliance requirements with no in-house expertise.
  • You need to be live in under twelve weeks.
  • Your team is base44-only.

We typically partner with internal engineering teams for self-hosted migrations rather than handing off entirely. The handoff itself is the highest-risk step; do it slowly and document everything.

Want a free migration assessment?

We will look at your stack, your compliance constraints, and your existing infrastructure, and tell you whether self-hosting is the right call. Free thirty-minute call.

Book a free migration assessment


Frequently asked questions

Q.01 Why would I self-host instead of going to Supabase or Vercel?
A.01

Three reasons: compliance (HIPAA, PCI, GDPR data residency), cost predictability at scale (managed Postgres past 100GB gets expensive), and existing infrastructure expertise. If you already operate Kubernetes or have a sysadmin team, self-hosting often costs less and gives you full control. If you do not, you are signing up for ops work that vendor platforms handle for you.

Q.02 What does my self-hosted base44 replacement actually look like?
A.02

A typical stack: Postgres for data, Node or Deno for the API, MinIO or S3-compatible storage for files, Caddy or Nginx for SSL and reverse proxy, Keycloak or Authelia for auth, Prometheus and Grafana for monitoring. Run it all on one VPS for small apps, or Kubernetes for serious workloads. Docker Compose is the right starting point; migrate to k8s when you outgrow it.

Q.03 How much does self-hosting cost per month versus base44?
A.03

A single 4GB VPS with Postgres, app server, and storage runs $20–$80/mo on Hetzner, DigitalOcean, or Linode. Add backups, monitoring, and a CDN: $100–$200/mo for small apps. Comparable base44 plan: $50–$500/mo. The savings show up at scale; under 1,000 users, base44 may actually be cheaper once you factor in operations time. Past 10,000 users, self-hosted costs less every time.

Q.04 Can I self-host on AWS, GCP, or Azure?
A.04

Yes, and most teams do. Use ECS or EKS on AWS, GKE on GCP, or AKS on Azure. Pair with managed Postgres (RDS, Cloud SQL, Azure Database) and S3-compatible storage. The migration playbook is the same; only the deployment target changes. For pure cost optimization, Hetzner or OVH crush the hyperscalers.

Q.05 What about HIPAA, SOC 2, or PCI compliance?
A.05

For many use cases, self-hosting is the only way to satisfy strict regulatory frameworks. Base44 is not HIPAA-eligible. Most managed platforms offer HIPAA tiers, but at significant cost premiums. Self-hosted on AWS GovCloud, Azure Government, or on-prem gives you the full audit trail and data-residency control regulators require. Plan for an extra four to eight weeks of compliance work on top of the migration itself.

Q.06 How do I handle backups and disaster recovery?
A.06

Postgres logical backups via pg_dump nightly, plus continuous WAL archiving via pgBackRest or wal-g for point-in-time recovery. Store backups in a separate region or provider. Test restores monthly. Document the runbook. This is non-negotiable; self-hosting means the backup strategy is yours, and a missed nightly cron means a lost dataset.

Q.07 Should I run Postgres on the same box as my app server?
A.07

For small apps with under 1,000 active users, yes — co-locate Postgres and the API on one VPS for simplicity. Past that, separate them: dedicated database host, dedicated app hosts, load balancer in front. The scaling boundary is roughly 10GB of data or 50 queries per second; once you hit either, split.

NEXT STEP

Plan your migration with engineers who have done it before.

Free 30-minute call. Fixed-price scope after.