Firebase is the Google-shaped alternative to Supabase for a base44 migration. The trade-off matrix is different but the work is similar in scope. This playbook covers the Firebase-specific decisions and the migration path.
If you do not have a specific reason to pick Firebase (existing Google Cloud commitments, native mobile parity, or strong preference for document data), Supabase is usually the simpler migration. Read this one if Firebase is the right call for your team.
Why migrate to Firebase
Three reasons that justify Firebase over alternatives:
- Native mobile SDKs. Firebase has first-class iOS and Android SDKs that work the same way as the web SDK. If you have or plan to have native mobile clients, Firebase keeps the data layer consistent across all three platforms. Supabase is web-first; mobile support is improving but lags Firebase.
- Real-time built in. Firestore's `onSnapshot` listeners give you live updates with zero infrastructure work. Base44 has no real-time support; this is one of the visible improvements users notice immediately.
- Google Cloud integration. If your team is already on GCP for compute, BigQuery, or Vertex AI, Firebase fits naturally. Cloud Functions can call other GCP services with native auth. Firestore exports to BigQuery for analytics.
The trade: Firestore is document-based, not relational. If your base44 schema relies heavily on joins, foreign keys, or transactions across entities, the data model conversion is significant work. You will denormalize. You will think differently. The migration is more architectural than mechanical.
What you keep, what you rebuild
| Layer | What you keep | What you rebuild |
|---|---|---|
| React components | 80–95% | SDK calls (@base44/sdk → Firebase SDK) |
| Routing | URL structure | Re-implement on Next.js or Vite |
| Schema definitions | Field names + types as reference | Redesign as Firestore collections (denormalize) |
| Database rows | Data | None |
| Authentication | User emails | Firebase Auth (force password reset) |
| Permissions | Logic | Firestore Security Rules |
| Backend functions | Function bodies | Wrap as Cloud Functions for Firebase |
| File uploads | Files | Re-upload to Firebase Storage |
| Webhooks | Endpoint URLs | New URLs at Cloud Functions HTTPS triggers |
| Scheduled jobs | None | Cloud Scheduler + Cloud Functions |
| Real-time | None | Free with Firestore listeners |
Schema redesign is the highest-effort step. Plan for it.
Architecture: source vs target
Base44 (current):
```
[browser] → CSR React (base44 hosted)
    ↓
@base44/sdk
    ↓
base44 backend (managed, opaque)
```
Firebase (target):
```
[browser] → Firebase Hosting (CDN-served React/Next.js)
    ↓
Firebase Auth      ← user identity
    ↓
Firestore          ← data (document DB)
    ↓
Cloud Functions    ← server-side logic
    ↓
Firebase Storage   ← files
    ↓
Cloud Scheduler    ← cron
    ↓
BigQuery           ← analytics (optional)
```
Everything is managed by Google. You do not run servers. You configure and deploy.
Step-by-step migration plan
Phase 1 — Discovery + schema redesign (Weeks 1–2)
1. Inventory base44 surface area
Standard inventory: entities, functions, integrations, scheduled tasks, webhooks. Grep your SDK calls.
```bash
grep -rn "base44\." src/ | tee migration/sdk-calls.txt
```
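That grep output can be turned into a per-entity tally to size the work (a sketch; `tallySdkCalls` is a hypothetical helper, and the sample lines mimic grep's `file:line:match` format):

```typescript
// Count base44 entity usages from grep output lines.
function tallySdkCalls(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of lines) {
    // Capture the entity name after "base44.entities."
    const m = line.match(/base44\.entities\.(\w+)/);
    if (m) counts[m[1]] = (counts[m[1]] ?? 0) + 1;
  }
  return counts;
}

const sample = [
  'src/App.tsx:12: await base44.entities.Project.find({})',
  'src/Tasks.tsx:8: await base44.entities.Task.create(data)',
  'src/App.tsx:30: await base44.entities.Project.update(id, patch)',
];
console.log(tallySdkCalls(sample)); // → { Project: 2, Task: 1 }
```

Entities with the most call sites are the ones to model first in the schema redesign.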
2. Redesign your schema for Firestore
This is the biggest difference from a Supabase migration. Firestore is document-based; you cannot do native joins. You denormalize.
For example, base44's relational shape:
```
projects: { id, owner_id, name, status }
tasks:    { id, project_id, title, assigned_to_id }
users:    { id, email, name }
```
Becomes Firestore collections:
```
/users/{userId}: { email, name }
/projects/{projectId}: {
  ownerId, name, status,
  ownerName   // denormalized from users for list views
}
/projects/{projectId}/tasks/{taskId}: {
  title, assignedToId, assignedToName   // denormalized
}
```
Three principles:
- Denormalize for read efficiency. If you display project names in a task list, store `projectName` on each task document.
- Subcollections for ownership hierarchies. Tasks live under their parent project as a subcollection if tasks always belong to one project.
- Index every query. Firestore charges per document read; design queries upfront and create composite indexes.
This redesign is one to two weeks of careful thinking. Do not skip it.
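The relational-to-document mapping above can be written down as a plain transform, useful both for reasoning about the shape and for the later backfill script (a sketch; `toProjectDoc` and the row interfaces are hypothetical names mirroring the example schema):

```typescript
// Relational rows as exported from base44 (example schema above).
interface UserRow { id: string; email: string; name: string }
interface ProjectRow { id: string; owner_id: string; name: string; status: string }

// Derive the denormalized /projects/{projectId} document shape.
function toProjectDoc(p: ProjectRow, usersById: Map<string, UserRow>) {
  return {
    ownerId: p.owner_id,
    name: p.name,
    status: p.status,
    // Denormalized so list views don't need a second read per row.
    ownerName: usersById.get(p.owner_id)?.name ?? "",
  };
}

const users = new Map<string, UserRow>([
  ["u_001", { id: "u_001", email: "alice@example.com", name: "Alice" }],
]);
const doc = toProjectDoc(
  { id: "p1", owner_id: "u_001", name: "Site", status: "active" },
  users,
);
// doc.ownerName is "Alice", pulled from the users map at write time
```

Writing these transforms per entity forces the denormalization decisions early, which is the point of this phase.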
Phase 2 — Firebase project setup (Week 2)
3. Create the Firebase project
```bash
npm install -g firebase-tools
firebase login
firebase init
```
Select Firestore, Functions, Hosting, Storage, and Auth during init. Enable Firestore in production mode (Security Rules locked by default, then opened explicitly). Enable Auth providers you need (email/password, Google, etc.).
4. Configure Firebase in your Next.js app
```bash
npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app
npm install firebase firebase-admin
```
```typescript
// lib/firebase.ts
import { initializeApp, getApps } from "firebase/app";
import { getFirestore } from "firebase/firestore";
import { getAuth } from "firebase/auth";

const config = {
  apiKey: process.env.NEXT_PUBLIC_FIREBASE_API_KEY!,
  authDomain: process.env.NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN!,
  projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID!,
  // Add storageBucket (and appId) here once you enable Storage or Analytics.
};

// Reuse the existing app on hot reload instead of initializing twice.
export const app = getApps()[0] ?? initializeApp(config);
export const db = getFirestore(app);
export const auth = getAuth(app);
```
Phase 3 — Schema deploy + Security Rules (Week 3)
5. Apply Security Rules
Firestore Security Rules are the RLS equivalent.
```
// firestore.rules
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /projects/{projectId} {
      allow read: if request.auth != null && resource.data.ownerId == request.auth.uid;
      allow create: if request.auth != null && request.resource.data.ownerId == request.auth.uid;
      allow update, delete: if request.auth != null && resource.data.ownerId == request.auth.uid;

      match /tasks/{taskId} {
        allow read, write: if request.auth != null
          && get(/databases/$(database)/documents/projects/$(projectId)).data.ownerId == request.auth.uid;
      }
    }
  }
}
```
Deploy:
```bash
firebase deploy --only firestore:rules
```
Test with the Firebase emulator before going to production. Rule bugs are the most common cause of post-migration data leaks.
Phase 4 — Auth migration (Week 4)
6. Bulk import users
```typescript
// scripts/import-users.ts
import * as admin from "firebase-admin";

admin.initializeApp({ credential: admin.credential.applicationDefault() });

const users = [
  // exported from base44
  { uid: "u_001", email: "alice@example.com", displayName: "Alice", emailVerified: true },
];

// The `hash` option is only required when the records carry passwordHash
// values; base44 does not export password hashes, so omit it and force a
// reset instead.
await admin.auth().importUsers(users);
```
For password-less migration, omit the password hash field. Send a password-reset email to every user via `admin.auth().generatePasswordResetLink(email)`.
For OAuth users, configure Google/Apple/Facebook providers in the Firebase Auth console with the same OAuth client IDs. Users re-link by email on first login.
Phase 5 — Backend rebuild (Weeks 4–6)
7. Port backend functions to Cloud Functions
Each base44 function becomes a Cloud Function for Firebase.
```typescript
// functions/src/index.ts
import { onRequest } from "firebase-functions/v2/https";
import { onSchedule } from "firebase-functions/v2/scheduler";
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";
import Stripe from "stripe";

initializeApp();
const db = getFirestore();

export const createInvoice = onRequest(async (req, res) => {
  const { projectId, amount } = req.body;
  const stripe = new Stripe(process.env.STRIPE_SECRET!);
  const invoice = await stripe.invoices.create({ customer: "...", auto_advance: true });
  await db.collection("invoices").add({ projectId, stripeId: invoice.id, amount });
  res.json({ id: invoice.id });
});

export const dailyDigest = onSchedule("every day 09:00", async () => {
  // scheduled job
});
```
Deploy:
```bash
firebase deploy --only functions
```
Functions auto-scale. They cold-start (100–500ms) on first invocation after idle. For latency-sensitive paths, use Cloud Run instead with min-instances configured.
8. Replace SDK calls in components
Same mechanical work as every migration:
```typescript
// before
const projects = await base44.entities.Project.find({ filter: { ownerId: user.uid } });

// after
import { collection, query, where, getDocs } from "firebase/firestore";
import { db } from "@/lib/firebase";

const q = query(collection(db, "projects"), where("ownerId", "==", user.uid));
const snap = await getDocs(q);
const projects = snap.docs.map((d) => ({ id: d.id, ...d.data() }));
```
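That `snap.docs.map(...)` line repeats in every component; a small typed helper centralizes it (a sketch; `DocLike` is a hypothetical minimal interface covering just the snapshot fields used here):

```typescript
// Minimal slice of Firestore's QueryDocumentSnapshot that we actually use.
interface DocLike<T> { id: string; data(): T }

// Collapse query results into plain objects carrying their document ids.
function mapDocs<T extends object>(docs: DocLike<T>[]): (T & { id: string })[] {
  return docs.map((d) => ({ id: d.id, ...d.data() }));
}

// With a real snapshot: const projects = mapDocs(snap.docs);
```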
For real-time, use `onSnapshot` instead of `getDocs`:

```typescript
import { onSnapshot } from "firebase/firestore";

// Inside a useEffect, so the returned cleanup unsubscribes on unmount.
const unsub = onSnapshot(q, (snap) => {
  setProjects(snap.docs.map((d) => ({ id: d.id, ...d.data() })));
});
return () => unsub();
```
This is the upgrade users notice immediately.
Phase 6 — Data backfill (Week 6)
9. Export from base44, transform, write to Firestore
```typescript
// scripts/import-projects.ts
import * as admin from "firebase-admin";
import projects from "./export/projects.json";

admin.initializeApp();
const db = admin.firestore();

for (const p of projects) {
  // Denormalize: fetch owner name. For large exports, prefetch all users
  // once into a Map instead of paying one read per project.
  const owner = await db.collection("users").doc(p.user_id).get();
  await db.collection("projects").doc(p.id).set({
    ownerId: p.user_id,
    ownerName: owner.data()?.name ?? "",
    name: p.name,
    status: p.status ?? "draft",
    createdAt: admin.firestore.Timestamp.fromDate(new Date(p.createdAt)),
  });
}
```
For large datasets, use batched writes (max 500 operations per batch):

```typescript
const batch = db.batch();
for (const p of chunk) {
  batch.set(db.collection("projects").doc(p.id), toRow(p));
}
await batch.commit();
```
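The `chunk` in that loop implies a splitting helper; a minimal one (hypothetical name `chunksOf`, sized to Firestore's 500-operation batch limit):

```typescript
// Split an array into slices of at most `size` items; Firestore batches
// accept at most 500 operations each.
function chunksOf<T>(items: T[], size = 500): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// 1,200 rows → batches of 500, 500, and 200:
// for (const chunk of chunksOf(rows)) { ...batch.set per row; await batch.commit(); }
```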
Validate row counts. Spot-check.
Phase 7 — Deploy + cutover (Weeks 7–8)
10. Deploy to Firebase Hosting
```bash
npm run build
firebase deploy --only hosting
```
Add a custom domain in the Firebase Hosting console. SSL is automatic. The site is served from Google's CDN. If you use Next.js with server-side rendering, enable Firebase Hosting's web frameworks support so the deploy provisions the rendering backend (see pitfall 7 below).
11. Dual-run + DNS swap
Standard cutover. Dual-write from base44 to Firestore for one to two weeks. Validate parity. Swap DNS at low-traffic hour.
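One way to structure the dual-write window is a thin wrapper that treats base44 as primary and Firestore as a shadow (a sketch; the `Sink` interface and `dualWriter` name are hypothetical, to be wired to the real SDK calls):

```typescript
// Anything that can persist a project document.
interface Sink { saveProject(id: string, doc: object): Promise<void> }

// Write to the current backend first; mirror to the new one, tolerating
// failures so a Firestore hiccup never breaks production writes.
function dualWriter(primary: Sink, shadow: Sink): Sink {
  return {
    async saveProject(id, doc) {
      await primary.saveProject(id, doc); // base44 stays the source of truth
      try {
        await shadow.saveProject(id, doc); // Firestore mirror
      } catch (err) {
        console.error("shadow write failed; reconcile later", id, err);
      }
    },
  };
}
```

At DNS swap you flip which side is primary; the parity validation compares the two stores before that flip.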
Phase 8 — Sunset
12. Decommission base44
Cancel after thirty days of stable Firebase production. Keep base44 export for ninety days.
Common pitfalls
1. Treating Firestore like Postgres. Joins do not work. Foreign keys do not work. Transactions across collections work but with constraints. Plan denormalization upfront or you fight the database for the entire migration.
2. Firestore read costs. Every document read costs money. A poorly designed list view can read 10,000 documents per page load and eat your budget. Always paginate. Always index. Profile your reads in the Firebase console.
3. Security Rules misconfigured. The "production mode" default locks everything. The "test mode" default is open to the world. Pick the locked default and open paths explicitly. Test with the emulator.
4. Cloud Function cold starts. First invocation after idle is 100–500ms slower. For user-facing latency-sensitive paths, set minInstances: 1 to keep one warm, or move to Cloud Run.
5. Data export from base44 doesn't fit Firestore shapes. You cannot import relational data directly. Plan the transform script as part of the migration, not as an afterthought.
6. Storage rules forgotten. Firebase Storage has its own rules separate from Firestore. Configure both.
7. SEO regression. Firebase Hosting serves static files and supports SSR with Cloud Functions or Cloud Run. If you want full SSR, configure properly. If you just deploy the SPA bundle, you keep base44's CSR-only SEO problem.
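For pitfall 6, a minimal `storage.rules` counterpart to the Firestore rules in Phase 3 (a sketch; it assumes uploads live under a per-user `/users/{userId}/` prefix):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /users/{userId}/{allPaths=**} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```

Deploy with `firebase deploy --only storage`, and test it in the emulator alongside the Firestore rules.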
Timeline + team
Six to ten weeks with this team:
- One full-stack engineer. Owns the rebuild end-to-end. Forty hours per week.
- One backend engineer. Owns Firestore schema design, Security Rules, and Cloud Functions. Twenty hours per week.
- One product owner. Validates parity. Five hours per week.
The schema redesign step is the biggest single risk. Get it right or you rebuild it later under load.
Cost
Migration tiers:
| Tier | Price | What you get |
|---|---|---|
| Small | $6,000 | 5–6 weeks, simple data model, Spark or low-Blaze tier |
| Medium | $12,000 | 6–8 weeks, complex denormalization, multiple Cloud Functions |
| Enterprise | $25,000+ | Compliance prep, multi-tenant data, white-glove cutover |
Firebase ongoing: $0–$50/mo for small apps on Blaze. $100–$500/mo at moderate scale. Watch read costs; they dominate the bill.
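A back-of-envelope model shows why reads dominate (a sketch; $0.06 per 100k document reads is an assumed ballpark — check current Firestore pricing for your region):

```typescript
// Rough monthly Firestore read cost:
// reads per page load × page loads per day × 30 days, priced per 100k reads.
function monthlyReadCostUSD(
  readsPerLoad: number,
  loadsPerDay: number,
  pricePer100k = 0.06, // assumed ballpark rate; verify against current pricing
): number {
  const monthlyReads = readsPerLoad * loadsPerDay * 30;
  return (monthlyReads / 100_000) * pricePer100k;
}

// Unpaginated list view reading 10,000 docs, 1,000 loads/day:
console.log(monthlyReadCostUSD(10_000, 1_000)); // ≈ $180/mo for one view
// Same view paginated to 25 docs per load:
console.log(monthlyReadCostUSD(25, 1_000));     // ≈ $0.45/mo
```

Pagination is a three-orders-of-magnitude lever; this is why pitfall 2 above says to profile reads before cutover.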
DIY vs hire decision
DIY this if:
- You have shipped a Firestore app before and understand document data modeling.
- Your app has under twenty entities and reasonable read patterns.
- You can spare six to ten weeks.
Hire help if:
- You have not used Firestore.
- Your app is heavily relational and the denormalization is non-obvious.
- You need to be live in under six weeks.
The schema design phase is what separates a successful Firebase migration from a stuck one. If your team has not done this before, the time-cost of learning often exceeds the cost of hiring an experienced Firestore engineer for the design phase.
Want a free migration assessment?
Tell us about your app. We will scope the schema redesign and the migration. Free thirty-minute call.
Book a free migration assessment
Related migrations
- Base44 to Next.js + Supabase — relational alternative if Firestore's document model is wrong for your data.
- Base44 to Vercel — pick your own backend; Vercel handles hosting.
- Base44 to self-hosted — full control if managed Firebase does not fit.