What's happening
A user enters data into your Base44 app. The save button completes. The toast says "Saved." They close the tab and come back two hours later. The data is gone. They call you. You check your admin view. The record is either missing entirely or visible to you but invisible to them.
This is one of the most-cited critical issues on the feedback board. The post titled "Too Many Bugs" describes it directly: "User data disappearing upon returning to the application...clearly unacceptable." Other users have reported the same pattern in comment threads under feedback posts on the credit system and the AI agent regression loop, suggesting a shared root cause in the data layer rather than feature-specific bugs.
The user-facing damage is severe. Lost form submissions, missing inventory edits, vanished customer records. Trust erodes immediately. Worse, because there is no error event, neither you nor your monitoring will see anything until the user complains.
Why this happens
Three independent failure modes converge into the same symptom.
Write acknowledgment without commit confirmation. Base44's SDK returns success on writes that have been accepted by the platform but may not have durably committed. If the user navigates away or the browser unloads before commit, the write can be lost. The SDK does not surface this as an error because, from its perspective, it handed off the write successfully.
SDK local-cache pollution across sessions. The SDK caches collection results in browser storage. Stale cache entries from previous sessions or previous users can return as the "current" collection on session start. The user sees an old or empty collection, assumes data loss, and reports a bug. The data is in the database; the cache is lying.
Row-Level Security misconfiguration. Base44's default visibility lets every user see every row. Teams retrofit RLS to fix this. A common mistake: writing an RLS predicate that filters on created_by = user.id when the actual column is user_id, or filtering on a session-bound variable that is not yet populated on the first request after login. The write succeeded. The read filters it out. The user sees nothing.
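The column-name mismatch is easy to see in miniature. Here is a minimal in-memory sketch (the table shape and field names are illustrative, not Base44 internals): the write path stamps ownership in `created_by` while the read policy filters on `user_id`, so the row persists but never comes back.

```typescript
// Illustrative model of an RLS column mismatch; not the Base44 API.
type Row = { created_by?: string; user_id?: string; total?: number };

const table: Row[] = [];

// Write path: the generated code stamps ownership in `created_by`.
function insertOrder(userId: string, total: number): void {
  table.push({ created_by: userId, total });
}

// Read path: the RLS-style policy filters on `user_id`, which no row has.
function selectVisible(userId: string): Row[] {
  return table.filter((row) => row.user_id === userId);
}

insertOrder("user-123", 42);
const visible = selectVisible("user-123");
console.log(table.length, visible.length); // → 1 0: the write succeeded, the read filters it out
```

The write succeeds, the database holds the row, and the user still sees an empty collection, which is exactly the "disappearing data" report.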
A fourth, less common cause is the documented platform bug "Apps Fail After Reverting Changes," where post-revert deploys throw "Unexpected error: please contact support@modal.com" and can corrupt session state. If you reverted recently and disappearing-data complaints started, the revert is likely your trigger.
Sources: docs.base44.com/Community-and-support/Troubleshooting, feedback.base44.com posts "Too Many Bugs" and "Fundamental Issues," merebase.com/vibe-coding-platforms-seo.
How to reproduce
1. Build a Base44 app with a basic form that writes one record per submission.
2. Submit a record as User A. Confirm it saves. Note the record ID if visible.
3. Close the tab.
4. Reopen the app. Sign in again as User A.
5. Open the collection view. Check whether the record from step 2 is present.
6. If present, repeat steps 1-5 but submit the form, then immediately close the tab within 200ms of the success toast (use Cmd+W or a scripted browser close). Some writes will be lost.
7. To reproduce the cache failure independently, log in as User A, write a record, log out, log in as User B, and check whether User B can see User A's record (or vice versa with a stale cache).
8. To reproduce the RLS failure, configure RLS on the collection and repeat the test as a third user; records that should be visible may be filtered out incorrectly.
If you can reproduce any of these, you have the bug class. Different users will hit different variants.
Step-by-step fix
1. Add write-confirmation reads
Do not trust the SDK's success callback. After every write, do an immediate read of the same record by ID and verify the persisted state matches.
async function saveOrderConfirmed(order: OrderInput): Promise<Order> {
const written = await base44.collection("orders").create(order);
// Confirm the write durably committed by reading it back.
const verified = await base44.collection("orders").get(written.id);
if (!verified || verified.total !== order.total) {
throw new Error(
`Order write not confirmed for id ${written.id}. ` +
`Expected total ${order.total}, got ${verified?.total ?? "missing"}.`
);
}
return verified;
}
This catches the acknowledged-but-not-committed class of bugs. It costs one extra read per write. It is worth it.
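One caveat: if the read-back races a slow commit, a single immediate read can report a false negative. A hedged generalization of the pattern (all names here are illustrative, not SDK API): retry the confirmation read a few times with a short delay before declaring the write lost.

```typescript
// Generic confirm-with-retry helper. `readById` is whatever read path your
// SDK exposes; `matches` checks the persisted state against what you sent.
async function confirmWrite<T>(
  readById: () => Promise<T | null>,
  matches: (record: T) => boolean,
  attempts = 3,
  delayMs = 250,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    const record = await readById();
    if (record !== null && matches(record)) return record; // durably visible
    if (i < attempts - 1) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("Write not confirmed after retries; treat as lost and resubmit.");
}
```

With the Step 1 example, `readById` would be `() => base44.collection("orders").get(written.id)` and `matches` would compare totals.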
2. Clear SDK cache on session start
On every login (and ideally every session-resume), invalidate the collections your app depends on.
async function onSessionStart(userId: string) {
// Force-refresh critical collections from the server, bypassing cache.
await base44.collection("orders").list({ where: { user_id: userId }, cache: "none" });
await base44.collection("profiles").list({ where: { user_id: userId }, cache: "none" });
// Add every user-scoped collection your UI depends on.
}
If cache: "none" is not supported in your SDK version, manually clear the relevant keys from localStorage and sessionStorage before your first read. Document this as a session-start ritual.
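A sketch of that manual fallback. The key prefix is an assumption (check what the SDK actually writes in your browser's storage inspector); the helper itself works against anything with the Web Storage shape.

```typescript
// Remove every cached entry under an assumed key prefix from a Web Storage
// object. `base44:` is a guessed prefix, not a documented one -- verify it.
interface StorageLike {
  readonly length: number;
  key(index: number): string | null;
  removeItem(key: string): void;
}

function clearCachePrefix(storage: StorageLike, prefix: string): number {
  const doomed: string[] = [];
  for (let i = 0; i < storage.length; i++) {
    const key = storage.key(i);
    if (key !== null && key.startsWith(prefix)) doomed.push(key);
  }
  // Collect first, then remove: removing while iterating shifts key indices.
  for (const key of doomed) storage.removeItem(key);
  return doomed.length;
}

// Session-start ritual, before the first read:
// clearCachePrefix(localStorage, "base44:");
// clearCachePrefix(sessionStorage, "base44:");
```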
3. Audit RLS policies in the admin context
Open your Base44 data settings. For every collection, list the active RLS policies. For each policy, write down:
- The exact predicate (e.g., `user_id = auth.uid()`).
- The actual column name on the table.
- The exact field used at write time when records were created.

Mismatches between any of these three are your bug. Common failures:

- Policy filters on `user_id` but the agent-generated code wrote `created_by`.
- Policy filters on `auth.uid()` but the SDK is providing `auth.user_id`.
- Policy is correct, but a subset of records were created before RLS was enabled and have null user fields.
Fix the policy and backfill any orphaned records explicitly.
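The backfill itself can be sketched generically. Collection and field names below are illustrative; adapt the ownership lookup to whatever evidence you have, such as an audit log of who wrote each record.

```typescript
// Backfill null ownership fields from a map of record id -> owner,
// reconstructed from audit logs or known user activity. Illustrative shape.
interface OwnedRecord {
  id: string;
  user_id: string | null;
}

function backfillOwners(
  records: OwnedRecord[],
  ownerById: Map<string, string>,
): { fixed: number; stillOrphaned: string[] } {
  let fixed = 0;
  const stillOrphaned: string[] = [];
  for (const record of records) {
    if (record.user_id !== null) continue; // already owned, leave it alone
    const owner = ownerById.get(record.id);
    if (owner !== undefined) {
      record.user_id = owner;
      fixed++;
    } else {
      stillOrphaned.push(record.id); // no evidence of ownership: manual review
    }
  }
  return { fixed, stillOrphaned };
}
```

Run this against an export first, verify the counts, then apply the updates through the SDK. Never backfill blind in production.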
4. Add a recovery path for empty collections
If a user's collection comes back empty but the user has a non-zero record count in the admin view, your read is filtered or cached incorrectly. Detect this and refetch.
async function getOrdersWithRecovery(userId: string): Promise<Order[]> {
let orders = await base44.collection("orders").list({ where: { user_id: userId } });
if (orders.length === 0) {
// Suspicious: user has been active. Refetch with no cache.
orders = await base44.collection("orders").list({
where: { user_id: userId },
cache: "none",
});
}
return orders;
}
This is a safety net, not a real fix. If the recovery path actually returns records that the first call missed, you have a cache or RLS bug to address upstream.
5. Log every write to an external sink
Send a copy of every successful write to a logging service (Logtail, BetterStack, or even a small serverless function that forwards to durable storage; serverless filesystems are ephemeral, so do not write to a local file). When a user reports lost data, you can prove whether the write happened and what the payload was.
async function saveWithAudit(collection: string, payload: Record<string, unknown>) {
const result = await base44.collection(collection).create(payload);
// Fire-and-forget: an audit-sink outage must never fail the user's save.
fetch("https://your-audit-sink/write", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
collection,
payload,
result_id: result.id,
timestamp: Date.now(),
user: payload.user_id ?? null,
}),
}).catch((err) => console.error("Audit write failed:", err));
return result;
}
This does not prevent loss. It makes loss provable, which lets you escalate platform issues with evidence.
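If one HTTP call per write is too chatty, the same idea works batched. A hedged sketch (the class and its shape are illustrative, not a library API): buffer audit entries in memory and flush them out-of-band, so the sink never adds latency to a save and a failed flush is retried rather than dropped.

```typescript
// Minimal non-blocking audit buffer: batch entries, flush out-of-band.
type AuditEntry = { collection: string; payload: unknown; timestamp: number };

class AuditBuffer {
  private entries: AuditEntry[] = [];
  private send: (batch: AuditEntry[]) => Promise<void>;

  constructor(send: (batch: AuditEntry[]) => Promise<void>) {
    this.send = send; // e.g. a function that POSTs the batch to your sink
  }

  record(collection: string, payload: unknown): void {
    this.entries.push({ collection, payload, timestamp: Date.now() });
  }

  async flush(): Promise<number> {
    if (this.entries.length === 0) return 0;
    const batch = this.entries;
    this.entries = [];
    try {
      await this.send(batch);
      return batch.length;
    } catch {
      // Sink unreachable: requeue the batch for the next flush.
      this.entries = batch.concat(this.entries);
      return 0;
    }
  }
}
```

Call `record()` inside your save path and `flush()` on an interval and on `visibilitychange`, so batches go out before the tab closes.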
6. If reverts caused this, do not revert again
The "Apps Fail After Reverting Changes" bug (docs.base44.com) is real. If your data-loss complaints started after a revert, do not revert again to fix it. Make forward-only changes. If you must revert, fully redeploy and verify in a staging environment first.
DIY vs hire decision
DIY this if: You have a single failure mode (only RLS, only cache, or only acknowledgment race), you can pause production for a day to verify the fix, and your project is small enough to instrument every write.
Hire help if: You have customer-facing data loss, you have multiple failure modes layered together (RLS + cache + agent regressions touching write paths), or your project handles regulated data (healthcare, finance, anything where loss has compliance implications). This is a complex-fix engagement: we audit RLS across every collection, instrument writes with confirmation reads, set up an external write-audit sink, and verify zero loss over a 7-day soak.
Need this fixed urgently?
Data loss is a complex-fix engagement. We audit your data layer end to end, ship the confirmation-read pattern across every write path, set up an external audit sink, and run a 7-day verification soak before handoff. Fixed price.
Start a complex-fix engagement for data loss
Related problems
- AI agent regression loop breaks working code — agent regenerations near write paths frequently introduce silent data-handling bugs.
- SSO bypass and auth vulnerabilities — many RLS misconfigurations descend from auth-context misunderstanding.
- No bulk delete — admin tasks don't scale — the same data-layer fragility surfaces here.