You decided to leave base44. Good. Here is what nobody tells you about migrating to a stack you actually own.
This playbook is the one we use internally when a client asks us to move their base44 app to Next.js + Supabase. It is opinionated, technical, and honest about the parts that hurt. If you are still deciding whether to leave at all, read when to leave base44 first, then come back.
Why migrate to Next.js + Supabase
You are leaving base44 for one of three reasons, and all three lead here:
- You want code ownership. Base44's exported code is locked to the base44 SDK: every database call, auth check, and file upload goes through @base44/sdk. Even after export, your code does not run anywhere else without a rewrite. This is the vendor lock-in problem, and it is the most-cited reason teams leave.
- You hit a production ceiling. Rate limits, no SLA, the February 2026 outage, regression loops, and CSR pages invisible to Google all push real apps off the platform.
- You need real-time, websockets, or background jobs. Base44 has none of these. Supabase has all three.
Next.js + Supabase is the destination because it inverts every base44 weakness. Server-side rendering by default. Full SQL. RLS that works the same way as base44's row-level rules. Edge Functions on Deno (the same runtime base44 uses for backend functions, so most function code ports with light edits). Free local development. Deterministic deploys. Real Git.
The trade is real, though. You give up the AI agent, the all-in-one editor, and the prompt-to-prototype speed. If those are your edge, this is not your migration.
What you keep, what you rebuild
Be honest about scope before you start. The frontend mostly carries; the backend always rebuilds.
| Layer | What you keep | What you rebuild |
|---|---|---|
| React components | 80–95% (JSX, Tailwind, hooks) | Anything calling base44.entities.* or base44.functions.* |
| Routing | Page structure | Re-implement on Next.js App Router |
| Schema definitions | Field names + types as reference | Recreate as SQL DDL or Prisma schema |
| Database rows | Yes (export + backfill) | None |
| Authentication | Email addresses, user IDs (as legacy keys) | Sessions, password hashes (force re-issue), OAuth client IDs |
| RLS rules | Logic as pseudocode | Rewrite as Postgres CREATE POLICY |
| Backend functions | Function bodies (Deno code mostly portable) | Wrap as Supabase Edge Functions or Next.js route handlers |
| File uploads | Files themselves (re-upload to Supabase Storage) | Upload paths, signed URLs |
| Webhooks | External webhook URLs | Endpoint code; update sender configs |
| Scheduled jobs | None — base44 has no real cron | Build with Supabase pg_cron or Vercel cron |
| Real-time | None — not supported in base44 | Build with Supabase Realtime |
| Email integration | Templates as text | Re-wire to Resend, Postmark, or SES |
| Third-party integrations | Credentials | OAuth flows, token refresh logic |
Plan for forty percent of the codebase to be rewritten. The frontend illusion of "it's just React" hides the fact that every data fetch is a base44 SDK call.
Architecture: source vs target
Base44 (current):
[browser] → CSR React bundle
↓
@base44/sdk (auth + entities + functions)
↓
base44 platform (Postgres + Deno + Storage + Auth)
↓
you have no idea
Everything runs on base44 infrastructure. You have no SQL access, no logs you control, no way to add a sidecar service, no way to run a cron, and no way to test offline.
Next.js + Supabase (target):
[browser] → Next.js (SSR + RSC + Client Components)
↓
@supabase/ssr + @supabase/supabase-js
↓
Supabase (Postgres + Auth + Storage + Edge Functions + Realtime)
↓
you own the schema, the SQL, the logs, the deploys
You can develop locally with supabase start. You can run migrations with supabase migration new. You can read every query in the dashboard. You can attach Datadog. You can fork your dev DB. You can write SQL.
Step-by-step migration plan
This is the order we run it in. Skip a step at your own risk.
Phase 1 — Discovery (Week 1)
1. Inventory every entity, function, and integration
Open base44 and list every entity, every backend function, every integration, every scheduled prompt, every external webhook. Capture row counts per entity. Capture call volumes per function from the platform metrics if you have them.
# A simple inventory file
mkdir -p migration && cd migration
cat > inventory.md <<'EOF'
## Entities
- users (~12,400 rows)
- projects (~8,200 rows)
- tasks (~94,300 rows)
...
## Functions
- POST /functions/createInvoice (Stripe)
- POST /functions/sendDigest (cron-style)
...
## Integrations
- Stripe (subscriptions)
- Resend (transactional email)
- Slack webhook (alerts)
EOF
Most teams discover thirty to forty percent more surface area than they expected. Do this in week one or you will miss things at cutover.
2. Map every base44 SDK call
Grep your exported codebase for every SDK reference:
grep -rn "base44\." src/ > sdk-calls.txt
wc -l sdk-calls.txt
The line count is your effort estimate. A small app has 50–150 calls. A medium app has 300–600. An enterprise app has 1,000+. Each call is a one-to-twenty minute rewrite.
3. Decide on parallel-run vs hard cutover
Parallel-run means you dual-write from base44 to Supabase for one to two weeks, validate parity, then redirect users. Hard cutover means you stop the world, migrate, and reopen in the new stack.
Choose parallel-run if you have paying users or production data. Choose hard cutover if you have under fifty active users and acceptable downtime. We default to parallel-run.
Phase 2 — Schema (Week 2)
4. Stand up Supabase and mirror the schema
npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app
npm install @supabase/supabase-js @supabase/ssr
npx supabase init
npx supabase start
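With @supabase/ssr installed, create the server client helper now; route handlers and Server Components throughout the rebuild will import it. A minimal sketch following the @supabase/ssr cookie pattern, assuming Next.js 15's async cookies() (drop the await on Next 14):
// lib/supabase/server.ts
import { createServerClient } from "@supabase/ssr";
import { cookies } from "next/headers";

export async function createClient() {
  const cookieStore = await cookies();
  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => cookieStore.getAll(),
        setAll: (cookiesToSet) => {
          // Server Components cannot set cookies; middleware refreshes the session instead
          try {
            cookiesToSet.forEach(({ name, value, options }) => cookieStore.set(name, value, options));
          } catch {}
        },
      },
    }
  );
}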
Translate each base44 entity to a Postgres table. Base44's "fields" map to columns. Be aggressive about converting string fields to proper types (uuid, timestamptz, numeric, text with constraints).
-- supabase/migrations/0001_init.sql
create table public.projects (
  id uuid primary key default gen_random_uuid(),
  owner_id uuid not null references auth.users(id) on delete cascade,
  name text not null,
  status text not null default 'draft' check (status in ('draft','active','archived')),
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);
create index projects_owner_id_idx on public.projects(owner_id);
5. Rebuild RLS
Base44's row-level rules become Postgres policies. The mental model is the same; the syntax is different.
alter table public.projects enable row level security;

create policy "owner_can_select" on public.projects
  for select using (auth.uid() = owner_id);
create policy "owner_can_insert" on public.projects
  for insert with check (auth.uid() = owner_id);
create policy "owner_can_update" on public.projects
  for update using (auth.uid() = owner_id);
create policy "owner_can_delete" on public.projects
  for delete using (auth.uid() = owner_id);
Test every policy with at least two users before moving on. RLS bugs at cutover are the most common cause of post-migration data leaks.
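A minimal version of that check as a script against the local stack; the two seeded users and their credentials are placeholders for whatever test accounts you create:
// scripts/rls-smoke-test.ts - run with npx tsx after seeding two test users
import { createClient } from "@supabase/supabase-js";

const url = process.env.SUPABASE_URL!;
const anon = process.env.SUPABASE_ANON_KEY!;

async function signIn(email: string, password: string) {
  const sb = createClient(url, anon);
  const { error } = await sb.auth.signInWithPassword({ email, password });
  if (error) throw error;
  return sb;
}

const alice = await signIn("alice@test.dev", "password-a"); // placeholder credentials
const bob = await signIn("bob@test.dev", "password-b");

// Alice inserts a row she owns; the insert policy requires owner_id = auth.uid()
const { data: { user } } = await alice.auth.getUser();
const { data: project, error } = await alice
  .from("projects")
  .insert({ name: "rls-check", owner_id: user!.id })
  .select()
  .single();
if (error) throw error;

// Bob must see zero rows; RLS filters selects silently rather than erroring
const { data: leaked } = await bob.from("projects").select("*").eq("id", project.id);
if (leaked?.length) throw new Error("RLS leak: bob can read alice's project");
console.log("RLS ok");
Note that a failing select policy returns zero rows, not an error, which is why the test asserts on the count.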
Phase 3 — Auth (Week 3)
6. Re-issue auth
You cannot migrate password hashes from base44; the platform does not expose them. You have two choices:
- Force a password reset on every user. Send a one-time email with a magic link to set a new password on Supabase. This is what we do ninety percent of the time.
- Keep base44 sessions live during dual-run. Validate base44 JWT in your Next.js middleware as a fallback while users gradually migrate. Cut off after thirty days.
For OAuth users (Google, GitHub), point Supabase Auth at the same OAuth client IDs and they re-link by email automatically.
// app/auth/migrate/route.ts
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

// inviteUserByEmail is an admin API: it needs the service-role key, not the anon key
const admin = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE!);

export async function POST(req: Request) {
  const { email } = await req.json();
  const { error } = await admin.auth.admin.inviteUserByEmail(email, {
    data: { migrated_from: "base44" },
  });
  if (error) return NextResponse.json({ error: error.message }, { status: 400 });
  return NextResponse.json({ ok: true });
}
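If you take the second option and keep base44 sessions live during the dual-run, the fallback lives in middleware. This is a sketch only: the legacy cookie name and the verification step are assumptions, since base44 does not document its token format.
// middleware.ts - dual-run auth fallback (base44_token and verifyLegacyToken are hypothetical)
import { NextResponse, type NextRequest } from "next/server";

export async function middleware(req: NextRequest) {
  // A Supabase session cookie (set by @supabase/ssr) wins
  if (req.cookies.getAll().some((c) => c.name.startsWith("sb-"))) {
    return NextResponse.next();
  }
  // Otherwise accept a still-valid base44 token and push the user into the migration flow
  const legacy = req.cookies.get("base44_token")?.value;
  if (legacy && (await verifyLegacyToken(legacy))) {
    return NextResponse.redirect(new URL("/auth/migrate", req.url));
  }
  return NextResponse.redirect(new URL("/login", req.url));
}

// Placeholder: verify the token however your base44 app allows during the window
async function verifyLegacyToken(token: string): Promise<boolean> {
  return false;
}

export const config = { matcher: ["/app/:path*"] };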
Phase 4 — Data backfill (Week 4)
7. Export from base44, transform, load to Supabase
Use base44's export endpoint or the SDK to dump every entity to JSON. Transform to match your new schema. Load via Supabase service-role key.
// scripts/backfill-projects.ts
import { createClient } from "@supabase/supabase-js";
import projects from "./export/projects.json";

const sb = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE!);

// Batch inserts to stay under request-size limits
function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

for (const batch of chunk(projects, 500)) {
  const { error } = await sb.from("projects").insert(
    batch.map((p) => ({
      id: p.id,
      owner_id: p.user_id, // base44 field names rarely match your new schema; map them here
      name: p.name,
      status: p.status ?? "draft",
      created_at: p.createdAt,
    }))
  );
  if (error) {
    console.error("batch failed", error);
    process.exit(1);
  }
}
Run on staging first. Validate row counts match. Spot-check ten random rows per entity.
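The row-count check is scriptable. A sketch, with the expected numbers coming straight from your Phase 1 inventory:
// scripts/validate-counts.ts
import { createClient } from "@supabase/supabase-js";

const sb = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE!);

// Expected counts from inventory.md (example figures, not yours)
const expected: Record<string, number> = { projects: 8200, tasks: 94300 };

for (const [table, want] of Object.entries(expected)) {
  // head: true returns the count without transferring rows
  const { count, error } = await sb.from(table).select("*", { count: "exact", head: true });
  if (error) throw error;
  console.log(`${table}: ${count} rows, expected ${want}`, count === want ? "OK" : "MISMATCH");
}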
Phase 5 — API rebuild (Weeks 5–7)
8. Port backend functions to Supabase Edge Functions
Base44 backend functions run on Deno. Supabase Edge Functions also run on Deno. Most function bodies port with two changes: replace base44.entities.X.create() with a Supabase client call, and read secrets from Deno.env.
// supabase/functions/create-invoice/index.ts
import { createClient } from "jsr:@supabase/supabase-js@2";
import Stripe from "npm:stripe@14";

Deno.serve(async (req) => {
  const { project_id, amount } = await req.json();
  const sb = createClient(Deno.env.get("SUPABASE_URL")!, Deno.env.get("SUPABASE_SERVICE_ROLE")!);
  const stripe = new Stripe(Deno.env.get("STRIPE_SECRET")!);
  const invoice = await stripe.invoices.create({ customer: "...", auto_advance: true });
  await sb.from("invoices").insert({ project_id, stripe_id: invoice.id, amount });
  return new Response(JSON.stringify({ id: invoice.id }), {
    headers: { "content-type": "application/json" },
  });
});
Deploy with supabase functions deploy create-invoice. Every function is independently deployable, version-controlled, and locally testable. This alone is worth the migration.
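On the calling side, what used to be a base44.functions.* call becomes an invoke, here with project_id and amount assumed in scope:
// replaces the old base44.functions.createInvoice(...) call
const { data, error } = await supabase.functions.invoke("create-invoice", {
  body: { project_id, amount },
});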
9. Replace base44.entities.* calls in the frontend
The mechanical work. For every base44.entities.X.find(), replace with a Supabase query.
// before
const projects = await base44.entities.Project.find({ filter: { ownerId: user.id } });

// after
const { data: projects } = await supabase
  .from("projects")
  .select("*")
  .eq("owner_id", user.id);
Do this in batches by route. Ship behind a feature flag. Compare outputs against base44 production data; when the diff is zero, flip the flag.
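The flag-plus-compare pattern looks like this; flags and normalize are our own helpers, not part of either SDK:
// Shadow-read wrapper for the flag window (assumes the same base44/supabase clients as above)
const flags = { readFromSupabase: false, compareReads: true };

// Sort by id and keep only stable fields so the two sources diff cleanly
const normalize = (rows: any[]) =>
  [...rows]
    .sort((a, b) => String(a.id).localeCompare(String(b.id)))
    .map(({ id, name, status }) => ({ id, name, status }));

async function getProjects(userId: string) {
  const legacy = await base44.entities.Project.find({ filter: { ownerId: userId } });
  const { data } = await supabase.from("projects").select("id,name,status").eq("owner_id", userId);
  const next = data ?? [];
  if (flags.compareReads && JSON.stringify(normalize(legacy)) !== JSON.stringify(normalize(next))) {
    console.warn("read divergence for user", userId); // investigate before flipping
  }
  return flags.readFromSupabase ? next : legacy;
}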
Phase 6 — Cutover (Week 8)
10. Dual-write window
For ten to fourteen days, every write hits both base44 and Supabase. Reads still come from base44. Compare row counts daily.
async function createProject(payload) {
  const [b44, sb] = await Promise.allSettled([
    base44.entities.Project.create(payload),
    supabase.from("projects").insert(payload),
  ]);
  if (b44.status === "rejected" || sb.status === "rejected") {
    await alertSlack("dual-write divergence", { b44, sb });
  }
  if (b44.status === "rejected") throw b44.reason;
  return b44.value; // base44 still source of truth
}
11. Read cutover
Flip read traffic to Supabase one entity at a time. Watch error rates. Roll back at any sign of trouble. Once all reads are on Supabase and stable for forty-eight hours, you are ready for write cutover.
12. Write cutover and freeze
Stop dual-writing. All traffic on Supabase. Lock the base44 app to read-only. Take a final snapshot of base44 data and store it in cold storage.
Phase 7 — Sunset (Weeks 9–10)
13. Validate, monitor, decommission
Run every business workflow end-to-end on the new stack. Compare aggregate metrics (signups, MRR, retention) against the prior week. If everything looks normal for two weeks, cancel the base44 plan.
Keep the read-only base44 export for ninety days. We have seen exactly two cases in three years where a customer needed to recover something. Both times, the snapshot saved them.
Common pitfalls
1. Underestimating SDK call volume. You think you have 200. You actually have 480. Grep first.
2. Forgetting RLS on backfill. If you load with the service-role key, RLS is bypassed. Re-enable and test policies before opening reads to users.
3. Auth re-issue surprises. Users who never log in during the dual-run window get locked out at write cutover. Send a "your account is moving" email two weeks before.
4. Missed scheduled jobs. Base44 has no real cron, but you may have built one with frontend timers or external services like Cron-job.org. Document them all and rebuild on Supabase pg_cron or Vercel cron (see the sketch after this list).
5. Realtime expectations. Once you migrate, users will ask for real-time updates because Supabase makes it trivial. Decide upfront whether you are scoping real-time in or out, or scope creep eats your timeline.
6. Stripe webhook endpoints. Update the endpoint URL in your Stripe dashboard at write cutover. Forget this and subscription renewals fail silently for hours. Same for any webhook from external systems.
7. SEO regression. Base44 was CSR-only, so your Google index was probably weak. Next.js with SSR and proper metadata fixes this, but URL structure changes can wipe your existing rankings. Map every old URL to a new one and ship 301 redirects on day one.
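For the Vercel cron option in pitfall 4, the job is a route handler plus a schedule entry in vercel.json. A sketch of the sendDigest job from the Phase 1 inventory; CRON_SECRET is an env var you set so Vercel authenticates the request:
// app/api/cron/send-digest/route.ts - invoked on the schedule declared in vercel.json
import { NextResponse } from "next/server";

export async function GET(req: Request) {
  // When CRON_SECRET is set, Vercel Cron sends it as a bearer token
  if (req.headers.get("authorization") !== `Bearer ${process.env.CRON_SECRET}`) {
    return new NextResponse("Unauthorized", { status: 401 });
  }
  // ...query Supabase and send the digest here
  return NextResponse.json({ ok: true });
}
The matching vercel.json entry maps the path to a cron expression, e.g. { "crons": [{ "path": "/api/cron/send-digest", "schedule": "0 8 * * *" }] }.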
Timeline + team
A typical migration runs eight to twelve weeks with this team:
- One senior full-stack engineer. Owns the rebuild end-to-end. Forty hours per week.
- One part-time DBA or backend specialist. Owns schema, RLS, and backfill. Ten hours per week.
- One product owner from your side. Validates feature parity. Five hours per week.
A smaller team works for smaller apps: one engineer can move a tiny CRUD app in two weeks. Beyond medium scope, do not try this with one part-time engineer; you will burn months and ship nothing.
Cost
Pricing tiers from our migration practice:
| Tier | Price | What you get |
|---|---|---|
| Small | $6,000 | Apps with under 20 entities, standard email auth, no custom integrations |
| Medium | $12,000 | Apps with custom integrations, non-trivial RLS, scheduled jobs |
| Enterprise | $25,000+ | Compliance constraints, complex roles, 50,000+ active users, custom SLA |
Each tier is fixed-price after a free thirty-minute discovery call and a paid $497 audit (refunded if you go ahead). Includes schema design, backfill scripts, dual-run setup, cutover, and four weeks of support.
DIY costs nothing in cash. It costs one senior engineer eight to twelve weeks of focused time, which is $50,000–$120,000 in fully-loaded salary. Either way, you are spending money. Decide which currency you have more of.
DIY vs hire decision
DIY this if:
- Your app has under twenty entities and standard auth.
- You have a senior engineer with React, Next.js, and Postgres experience already on the team.
- You can spare eight weeks of their focused time without missing other commitments.
- You do not have paying users who will churn during downtime.
Hire help if:
- Your app has over fifty entities, custom integrations, or complex RLS.
- You are running on revenue that depends on the app being live.
- Your team is full-stack JavaScript but has no Postgres or RLS experience.
- You tried the migration once and got stuck.
The most expensive migration is one you start, abandon, and restart six months later. That is a real pattern we see roughly every other quarter.
Want a free migration assessment?
We will look at your base44 app, count your SDK calls, identify the risk hotspots, and give you a fixed-price scope. Free thirty-minute call.
Book a free migration assessment
Related migrations
- Base44 to Vercel — frontend-first migration when you want to keep base44 as the data layer for now.
- Base44 export code guide — how to actually use the GitHub export feature, what works, what does not.
- When to leave base44 — the decision framework if you are not yet sure migration is the right call.