BASE44DEVS


Migrate Base44 to Next.js + Supabase: A Production Playbook

Migrating a base44 app to Next.js + Supabase takes eight to twelve weeks for a typical production app. The frontend usually carries over with minor changes, but every base44 SDK call has to be rewritten against Supabase's Postgres, Auth, Storage, and Edge Functions. Plan for schema export, RLS rebuild, auth re-issue, and a parallel-run cutover. Budget six to twelve thousand dollars if you hire it out.

Last verified: 2026-05-01
Difficulty: HARD
Est. effort: ~360h
Target: Next.js + Supabase

You decided to leave base44. Good. Here is what nobody tells you about migrating to a stack you actually own.

This playbook is the one we use internally when a client asks us to move their base44 app to Next.js + Supabase. It is opinionated, technical, and honest about the parts that hurt. If you are still deciding whether to leave at all, read when to leave base44 first, then come back.

Why migrate to Next.js + Supabase

You are leaving base44 for one of three reasons, and all three lead here:

  1. You want code ownership. Base44's exported code is locked to the base44 SDK. Every database call, auth check, and file upload goes through @base44/sdk. Even after export, your code does not run anywhere else without a rewrite. This is the vendor lock-in problem and it is the most-cited reason teams leave.
  2. You hit a production ceiling. Rate limits, no SLA, the February 2026 outage, regression loops, and CSR pages invisible to Google all push real apps off the platform.
  3. You need real-time, websockets, or background jobs. Base44 has none of these. Supabase has all three.

Next.js + Supabase is the destination because it inverts every base44 weakness. Server-side rendering by default. Full SQL. RLS that works the same way as base44's row-level rules. Edge Functions on Deno (the same runtime base44 uses for backend functions, so most function code ports with light edits). Free local development. Deterministic deploys. Real Git.

The trade is real, though. You give up the AI agent, the all-in-one editor, and the prompt-to-prototype speed. If those are your edge, this is not your migration.

What you keep, what you rebuild

Be honest about scope before you start. The frontend mostly carries; the backend always rebuilds.

| Layer | What you keep | What you rebuild |
| --- | --- | --- |
| React components | 80–95% (JSX, Tailwind, hooks) | Anything calling base44.entities.* or base44.functions.* |
| Routing | Page structure | Re-implement on Next.js App Router |
| Schema definitions | Field names + types as reference | Recreate as SQL DDL or Prisma schema |
| Database rows | Yes (export + backfill) | None |
| Authentication | Email addresses, user IDs (as legacy keys) | Sessions, password hashes (force re-issue), OAuth client IDs |
| RLS rules | Logic as pseudocode | Rewrite as Postgres CREATE POLICY |
| Backend functions | Function bodies (Deno code mostly portable) | Wrap as Supabase Edge Functions or Next.js route handlers |
| File uploads | Files themselves (re-upload to Supabase Storage) | Upload paths, signed URLs |
| Webhooks | External webhook URLs | Endpoint code; update sender configs |
| Scheduled jobs | None — base44 has no real cron | Build with Supabase pg_cron or Vercel cron |
| Real-time | None — not supported in base44 | Build with Supabase Realtime |
| Email integration | Templates as text | Re-wire to Resend, Postmark, or SES |
| Third-party integrations | Credentials | OAuth flows, token refresh logic |

Plan for forty percent of the codebase to be rewritten. The frontend illusion of "it's just React" hides the fact that every data fetch is a base44 SDK call.

Architecture: source vs target

Base44 (current):

[browser] → CSR React bundle
              ↓
        @base44/sdk (auth + entities + functions)
              ↓
     base44 platform (Postgres + Deno + Storage + Auth)
              ↓
            you have no idea

Everything runs on base44 infrastructure. You have no SQL access, no logs you control, no way to add a sidecar service, no way to run a cron, and no way to test offline.

Next.js + Supabase (target):

[browser] → Next.js (SSR + RSC + Client Components)
              ↓
       @supabase/ssr + @supabase/supabase-js
              ↓
   Supabase (Postgres + Auth + Storage + Edge Functions + Realtime)
              ↓
       you own the schema, the SQL, the logs, the deploys

You can develop locally with supabase start. You can run migrations with supabase migration new. You can read every query in the dashboard. You can attach Datadog. You can fork your dev DB. You can write SQL.

Step-by-step migration plan

This is the order we run it in. Skip a step at your own risk.

Phase 1 — Discovery (Week 1)

1. Inventory every entity, function, and integration

Open base44 and list every entity, every backend function, every integration, every scheduled prompt, every external webhook. Capture row counts per entity. Capture call volumes per function from the platform metrics if you have them.

# A simple inventory file
mkdir -p migration && cd migration
cat > inventory.md <<'EOF'
## Entities
- users (~12,400 rows)
- projects (~8,200 rows)
- tasks (~94,300 rows)
...

## Functions
- POST /functions/createInvoice (Stripe)
- POST /functions/sendDigest (cron-style)
...

## Integrations
- Stripe (subscriptions)
- Resend (transactional email)
- Slack webhook (alerts)
EOF

Most teams discover thirty to forty percent more surface area than they expected. Do this in week one or you will miss things at cutover.

2. Map every base44 SDK call

Grep your exported codebase for every SDK reference:

grep -rn "base44\." src/ > sdk-calls.txt
wc -l sdk-calls.txt

The line count is your effort estimate. A small app has 50–150 calls. A medium app has 300–600. An enterprise app has 1,000+. Each call is a one-to-twenty minute rewrite.
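To see where the rewrite effort concentrates, a small helper can bucket that grep output by entity. A sketch (the line format assumes the grep command above; the function name is ours):

```typescript
// Bucket grep output by entity to see where rewrite effort concentrates.
// Input lines look like: "src/pages/Projects.tsx:12: await base44.entities.Project.find(...)"
function countCallsByEntity(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of lines) {
    const m = line.match(/base44\.entities\.(\w+)\./);
    if (m) counts[m[1]] = (counts[m[1]] ?? 0) + 1;
  }
  return counts;
}
```

Run it over sdk-calls.txt and tackle the heaviest entities first; they are usually the ones with the most routes depending on them.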

3. Decide on parallel-run vs hard cutover

Parallel-run means you dual-write from base44 to Supabase for one to two weeks, validate parity, then redirect users. Hard cutover means you stop the world, migrate, and reopen in the new stack.

Choose parallel-run if you have paying users or production data. Choose hard cutover if you have under fifty active users and acceptable downtime. We default to parallel-run.

Phase 2 — Schema (Week 2)

4. Stand up Supabase and mirror the schema

npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app
npm install @supabase/supabase-js @supabase/ssr
npx supabase init
npx supabase start

Translate each base44 entity to a Postgres table. Base44's "fields" map to columns. Be aggressive about converting string fields to proper types (uuid, timestamptz, numeric, text with constraints).

-- supabase/migrations/0001_init.sql
create table public.projects (
  id uuid primary key default gen_random_uuid(),
  owner_id uuid not null references auth.users(id) on delete cascade,
  name text not null,
  status text not null default 'draft' check (status in ('draft','active','archived')),
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create index projects_owner_id_idx on public.projects(owner_id);

5. Rebuild RLS

Base44's row-level rules become Postgres policies. The mental model is the same; the syntax is different.

alter table public.projects enable row level security;

create policy "owner_can_select" on public.projects
  for select using (auth.uid() = owner_id);

create policy "owner_can_insert" on public.projects
  for insert with check (auth.uid() = owner_id);

create policy "owner_can_update" on public.projects
  for update using (auth.uid() = owner_id);

create policy "owner_can_delete" on public.projects
  for delete using (auth.uid() = owner_id);

Test every policy with at least two users before moving on. RLS bugs at cutover are the most common cause of post-migration data leaks.

Phase 3 — Auth (Week 3)

6. Re-issue auth

You cannot migrate password hashes from base44; the platform does not expose them. You have two choices:

  1. Force a password reset on every user. Send a one-time email with a magic link to set a new password on Supabase. This is what we do ninety percent of the time.
  2. Keep base44 sessions live during dual-run. Validate base44 JWT in your Next.js middleware as a fallback while users gradually migrate. Cut off after thirty days.

For OAuth users (Google, GitHub), point Supabase Auth at the same OAuth client IDs and they re-link by email automatically.

// app/auth/migrate/route.ts
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const { email } = await req.json();
  // Admin endpoints require the service-role key; keep this route server-only.
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE!);
  const { error } = await supabase.auth.admin.inviteUserByEmail(email, {
    data: { migrated_from: "base44" },
  });
  if (error) return NextResponse.json({ error: error.message }, { status: 400 });
  return NextResponse.json({ ok: true });
}

Phase 4 — Data backfill (Week 4)

7. Export from base44, transform, load to Supabase

Use base44's export endpoint or the SDK to dump every entity to JSON. Transform to match your new schema. Load via Supabase service-role key.

// scripts/backfill-projects.ts
import { createClient } from "@supabase/supabase-js";
import projects from "./export/projects.json";

const sb = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE!);

// Split the export into fixed-size batches so inserts stay under payload limits.
function chunk<T>(rows: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

for (const batch of chunk(projects, 500)) {
  const { error } = await sb.from("projects").insert(
    batch.map((p) => ({
      id: p.id,
      owner_id: p.user_id, // base44 field names often differ; map them explicitly
      name: p.name,
      status: p.status ?? "draft",
      created_at: p.createdAt,
    }))
  );
  if (error) {
    console.error("batch failed", error);
    process.exit(1);
  }
}

Run on staging first. Validate row counts match. Spot-check ten random rows per entity.
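The row-count check is easy to script. A minimal sketch (entity names and counts are illustrative) that compares the export's counts against what landed in Supabase:

```typescript
// Compare per-entity row counts between the base44 export and Supabase after backfill.
// Returns human-readable mismatches; an empty list means the counts line up.
function diffRowCounts(
  source: Record<string, number>,
  target: Record<string, number>
): string[] {
  const mismatches: string[] = [];
  for (const [entity, expected] of Object.entries(source)) {
    const actual = target[entity] ?? 0;
    if (actual !== expected) {
      mismatches.push(`${entity}: expected ${expected}, got ${actual}`);
    }
  }
  return mismatches;
}
```

Feed it the counts from your inventory.md on one side and a `select count(*)` per table on the other, and fail the staging run on any non-empty result.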

Phase 5 — API rebuild (Weeks 5–7)

8. Port backend functions to Supabase Edge Functions

Base44 backend functions run on Deno. Supabase Edge Functions also run on Deno. Most function bodies port with two changes: replace base44.entities.X.create() with a Supabase client call, and read secrets from Deno.env.

// supabase/functions/create-invoice/index.ts
import { createClient } from "jsr:@supabase/supabase-js@2";
import Stripe from "npm:stripe@14";

Deno.serve(async (req) => {
  const { project_id, amount } = await req.json();
  const sb = createClient(Deno.env.get("SUPABASE_URL")!, Deno.env.get("SUPABASE_SERVICE_ROLE")!);
  const stripe = new Stripe(Deno.env.get("STRIPE_SECRET")!);

  const invoice = await stripe.invoices.create({ customer: "...", auto_advance: true });
  await sb.from("invoices").insert({ project_id, stripe_id: invoice.id, amount });
  return new Response(JSON.stringify({ id: invoice.id }), {
    headers: { "content-type": "application/json" },
  });
});

Deploy with supabase functions deploy create-invoice. Every function is independently deployable, version-controlled, and locally testable. This alone is worth the migration.

9. Replace base44.entities.* calls in the frontend

The mechanical work. For every base44.entities.X.find(), replace with a Supabase query.

// before
const projects = await base44.entities.Project.find({ filter: { ownerId: user.id } });

// after
const { data: projects } = await supabase
  .from("projects")
  .select("*")
  .eq("owner_id", user.id);

Do this in batches by route. Ship behind a feature flag. Compare outputs against the base44 production data. When the diff is zero, flip the flag.
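One way to run that comparison is to fetch the same query from both backends and diff by id. A sketch (assumes rows carry a string id; the function name is ours):

```typescript
// Shadow-read comparison: given the same query's results from base44 and Supabase,
// report rows that exist on one side but not the other. Non-empty means don't flip yet.
function rowDiff(b44Rows: { id: string }[], sbRows: { id: string }[]): string[] {
  const sbIds = new Set(sbRows.map((r) => r.id));
  const b44Ids = new Set(b44Rows.map((r) => r.id));
  return [
    ...b44Rows.filter((r) => !sbIds.has(r.id)).map((r) => `missing in supabase: ${r.id}`),
    ...sbRows.filter((r) => !b44Ids.has(r.id)).map((r) => `missing in base44: ${r.id}`),
  ];
}
```

Log the diff output to your error tracker during the shadow period so divergence shows up before users see it.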

Phase 6 — Cutover (Week 8)

10. Dual-write window

For ten to fourteen days, every write hits both base44 and Supabase. Reads still come from base44. Compare row counts daily.

async function createProject(payload) {
  const [b44, sb] = await Promise.allSettled([
    base44.entities.Project.create(payload),
    supabase.from("projects").insert(payload),
  ]);
  if (b44.status === "rejected" || sb.status === "rejected") {
    await alertSlack("dual-write divergence", { b44, sb });
  }
  if (b44.status === "rejected") throw b44.reason; // source-of-truth write failed: surface it
  return b44.value; // base44 still source of truth
}

11. Read cutover

Flip read traffic to Supabase one entity at a time. Watch error rates. Roll back at any sign of trouble. Once all reads are on Supabase and stable for forty-eight hours, you are ready for write cutover.
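A per-entity flag keeps the cutover reversible. A minimal sketch (entity names are illustrative; unknown entities deliberately default to the old source):

```typescript
// Per-entity read-source flag for the entity-by-entity read cutover.
type ReadSource = "base44" | "supabase";

const readFlags: Record<string, ReadSource> = {
  projects: "supabase", // flipped and stable
  tasks: "base44",      // not flipped yet
};

// Anything not explicitly flipped keeps reading from base44, so a typo
// or a forgotten entity fails safe instead of silently reading new data.
function readSource(entity: string): ReadSource {
  return readFlags[entity] ?? "base44";
}
```

In production you would back this with a config service or env vars rather than a hardcoded map, so rollback is a config change, not a deploy.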

12. Write cutover and freeze

Stop dual-writing. All traffic on Supabase. Lock the base44 app to read-only. Take a final snapshot of base44 data and store it in cold storage.

Phase 7 — Sunset (Weeks 9–10)

13. Validate, monitor, decommission

Run every business workflow end-to-end on the new stack. Compare aggregate metrics (signups, MRR, retention) against the prior week. If everything looks normal for two weeks, cancel the base44 plan.

Keep the read-only base44 export for ninety days. We have seen exactly two cases in three years where a customer needed to recover something. Both times, the export saved the migration.

Common pitfalls

1. Underestimating SDK call volume. You think you have 200. You actually have 480. Grep first.

2. Forgetting RLS on backfill. If you load with the service-role key, RLS is bypassed. Re-enable and test policies before opening reads to users.

3. Auth re-issue surprises. Users who never log in during the dual-run window get locked out at write cutover. Send a "your account is moving" email two weeks before.

4. Missed scheduled jobs. Base44 has no real cron, but you may have built one with frontend timers or external services like Cron-job.org. Document them all and rebuild on Supabase pg_cron or Vercel cron.

5. Realtime expectations. Once you migrate, users will ask for real-time updates because Supabase makes it trivial. Decide upfront whether you are scoping real-time in or out, or scope creep eats your timeline.

6. Stripe webhook endpoints. Update the endpoint URL in your Stripe dashboard at write cutover. Forget this and subscription renewals fail silently for hours. Same for any webhook from external systems.

7. SEO regression. Base44 was CSR-only, so your Google index was probably weak. Next.js with SSR and proper metadata fixes this, but URL structure changes can wipe your existing rankings. Map every old URL to a new one and ship 301 redirects on day one.
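The Vercel-cron option from pitfall 4 can be sketched as a Next.js route handler. The route name and job body are illustrative; Vercel sends `Authorization: Bearer <CRON_SECRET>` on cron invocations when that env var is set on the project:

```typescript
// app/api/cron/send-digest/route.ts: a Vercel cron target (scheduled via vercel.json).
export async function GET(req: Request) {
  // Reject anything that is not Vercel's cron invocation.
  const auth = req.headers.get("authorization");
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("unauthorized", { status: 401 });
  }
  // ...do the scheduled work here, e.g. call a Supabase Edge Function...
  return new Response("ok", { status: 200 });
}
```

The pg_cron alternative lives entirely inside Postgres and is the better fit when the job is pure SQL; use a route handler like this when the job needs to talk to external APIs.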

Timeline + team

A typical migration runs eight to twelve weeks with this team:

  • One senior full-stack engineer. Owns the rebuild end-to-end. Forty hours per week.
  • One part-time DBA or backend specialist. Owns schema, RLS, and backfill. Ten hours per week.
  • One product owner from your side. Validates feature parity. Five hours per week.

A smaller team works for smaller apps: one engineer can move a tiny CRUD app in two weeks. Beyond medium scope, do not try this with one part-time engineer; you will burn months and ship nothing.

Cost

Pricing tiers from our migration practice:

| Tier | Price | What you get |
| --- | --- | --- |
| Small | $6,000 | Apps with under 20 entities, standard email auth, no custom integrations |
| Medium | $12,000 | Apps with custom integrations, non-trivial RLS, scheduled jobs |
| Enterprise | $25,000+ | Compliance constraints, complex roles, 50,000+ active users, custom SLA |

Each tier is fixed-price after a free thirty-minute discovery call and a paid $497 audit (refunded if you proceed). Every tier includes schema design, backfill scripts, dual-run setup, cutover, and four weeks of support.

DIY costs nothing in cash. It costs one senior engineer eight to twelve weeks of focused time, which is $50,000 to $120,000 in fully loaded salary. Either way, you are spending money. Decide which currency you have more of.

DIY vs hire decision

DIY this if:

  • Your app has under twenty entities and standard auth.
  • You have a senior engineer with React, Next.js, and Postgres experience already on the team.
  • You can spare eight weeks of their focused time without missing other commitments.
  • You do not have paying users who will churn during downtime.

Hire help if:

  • Your app has over fifty entities, custom integrations, or complex RLS.
  • You are running on revenue that depends on the app being live.
  • Your team is full-stack JavaScript but has no Postgres or RLS experience.
  • You tried the migration once and got stuck.

The most expensive migration is one you start, abandon, and restart six months later. That is a real pattern we see roughly every other quarter.

Want a free migration assessment?

We will look at your base44 app, count your SDK calls, identify the risk hotspots, and give you a fixed-price scope. Free thirty-minute call.

Book a free migration assessment

  • Base44 to Vercel — frontend-first migration when you want to keep base44 as the data layer for now.
  • Base44 export code guide — how to actually use the GitHub export feature, what works, what does not.
  • When to leave base44 — the decision framework if you are not yet sure migration is the right call.


Frequently asked questions

Q.01 How long does it actually take to migrate a base44 app to Next.js and Supabase?
A.01

Eight to twelve weeks for a single production app with one to two engineers. A small CRUD app with a dozen entities and email auth is closer to four to six weeks. An app with custom integrations, complex RLS rules, scheduled jobs, and external webhooks is twelve to sixteen weeks. The variable that matters most is how many base44 SDK calls live in your frontend; every one of them has to be rewritten.

Q.02 Will my exported base44 code work as-is on Next.js?
A.02

No. The exported code is React + Tailwind, which mostly carries over, but every component that calls the base44 SDK is broken until you rewrite the data layer. The frontend folder is reusable. The backend folder, schema definitions, and auth integration all need replacement. Estimate roughly forty percent of the codebase has to be rebuilt, and ninety-five percent of the data-access layer.

Q.03 Can I migrate the database without downtime?
A.03

Yes, with a dual-write window. Stand up Supabase, mirror the schema, backfill historical data, then dual-write from base44 to Supabase for one to two weeks while you validate. Cut over reads first, then writes. Total downtime can be under five minutes if you do this carefully. Most teams accept a fifteen-to-thirty-minute maintenance window because it is simpler.

Q.04 What does Supabase replace in base44?
A.04

Supabase Postgres replaces base44's database. Supabase Auth replaces base44 Auth, including OAuth and magic links. Supabase Storage replaces base44 file storage. Supabase Edge Functions (Deno) replace base44 backend functions, with the bonus that you can run them locally. Realtime replaces nothing in base44 because base44 has no real-time support; this is usually the upgrade users want most.

Q.05 How much does this migration cost if I hire it out?
A.05

Our small-migration tier is six thousand dollars for apps under twenty entities and standard auth. Medium is twelve thousand for apps with custom integrations or non-trivial RLS. Enterprise starts at twenty-five thousand for apps with regulatory constraints, complex roles, or migration of more than fifty thousand active users. DIY is free in cash but costs an engineer two to three months of focused time.

Q.06 What do I lose by leaving base44?
A.06

You lose the AI agent that generates code from prompts, the all-in-one editor, and the platform-managed deploys. You gain code ownership, deterministic deploys, real version control, the ability to scale without rate limits, real-time data, and full SQL. Most teams who finish the migration say the trade is obvious within the first month.

Q.07 Should I migrate to Supabase or build my own Postgres?
A.07

Supabase if you want fast wins on auth, storage, RLS, and edge functions without operating infrastructure. Self-hosted Postgres if you have compliance constraints (data residency, on-prem) or already run Kubernetes. For ninety percent of base44 apps, Supabase is the right answer. See our self-hosted playbook if Supabase does not fit.

NEXT STEP

Plan your migration with engineers who have done it before.

Free 30-minute call. Fixed-price scope after.