BASE44DEVS

FIX · DATA · MEDIUM

Base44 No Bulk Delete — Admin Tasks Don't Scale

Base44's SDK does not expose a server-side bulk delete with filters. Admins must either delete records one by one through the UI or fetch all matching records to the client before deleting, which fails at scale and burns memory. The fix is a backend Deno function that paginates through filtered records and deletes them server-side, plus a migration plan if you have over 10,000 records.

Last verified
2026-05-01
Category
DATA
Difficulty
MODERATE
DIY possible
YES

What's happening

You have an admin panel. You need to delete 5,000 expired sessions, or 20,000 abandoned cart records, or last quarter's archive of 50,000 log entries. You open base44's editor, look for a bulk-delete affordance, and there is none. You ask the AI agent to write one. The agent generates a function that fetches all matching records to the client, then loops through them. It runs out of memory at 5,000 rows, or hits a 429 rate limit, or simply hangs.

A user on the feedback board summarized the production impact: "Platform lacks server-side bulk delete functionality...creates performance degradation, scalability failure, and technical debt" — Chris Cotton. The same complaint surfaces under "Critical Bug/SDK Missing" with significant upvote support.

In practice, this means admin tasks that would take 10 seconds in any proper backend take hours of manual clicking, or break entirely past a few thousand records. Teams running marketplace platforms, fintech apps, or anything log-heavy hit this within their first month of real production usage.

Why this happens

Base44's SDK was architected for single-record CRUD. Each delete is one HTTP request with one record ID. There is no deleteMany(filter) primitive in the SDK and no equivalent UI affordance with filtering. The AI agent, when asked for "bulk delete", reaches for the only tool available: fetch all matching records, then loop and delete them one by one.
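
In sketch form, the agent's typical output looks something like the following. This is a hypothetical reconstruction of the anti-pattern, not verbatim agent output, built from the same SDK calls used in the fix later in this article:

// What the agent reaches for: fetch everything, then loop. Shown only to
// make the failure modes concrete; do not ship this.
import { base44 } from '@base44/sdk';

export async function naiveBulkDelete() {
  // One unbounded query. It is silently capped at 5,000 rows, and a large
  // result set can blow the Deno memory ceiling before any delete runs.
  const records = await base44.entities.YourEntity.list({
    filter: { archived: true },
  });

  // Thousands of back-to-back deletes with no pacing. This is what trips
  // the 429 rate limit partway through, leaving the deletion half-done.
  for (const record of records) {
    await base44.entities.YourEntity.delete(record.id);
  }
}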

That approach hits three platform limits, in order:

  1. The 5,000-item-per-request limit. Introduced 2025-11-27, this caps any single SDK query at 5,000 returned rows. Datasets larger than that need pagination.
  2. The Deno runtime memory ceiling. Loading tens of thousands of records into a single function invocation runs out of memory and the function dies mid-loop, leaving the deletion partially complete.
  3. Rate limits. Hammering the SDK's per-record delete endpoint thousands of times in a row triggers the rate-limit-429-production-throttle circuit breaker. The function fails halfway through with a 429.

The combination means there is no naive solution. A correct bulk delete must paginate, batch, and pace itself. Writing that correctly requires knowing all three constraints — knowledge the AI agent does not reliably encode in its generated code.

The deeper architectural cause is that base44 prioritizes the AI agent's ability to write "obvious" code over giving developers production primitives. Bulk operations carry partial-failure semantics, transaction concerns, and idempotency requirements that the agent handles unreliably. Rather than ship a primitive the agent might misuse, the platform shipped nothing. The cost is pushed onto the operator.

Source: feedback.base44.com "Critical Bug/SDK Missing" thread; SDK reference at base44.com/docs; the 5,000-row cap announcement in base44's 2025-11-27 changelog.

How to reproduce

  1. Create a base44 entity with a boolean archived field.
  2. Generate or import 10,000 records, half with archived: true.
  3. Build an admin page with a "Delete all archived" button.
  4. Ask the AI agent to wire the button to delete all archived: true records.
  5. Click the button. Observe one of three failure modes: the function runs out of memory, hits 429s halfway through, or appears to succeed but only deletes ~5,000 records (the 5,000-row query cap).
  6. Refresh and count remaining archived: true records. Confirm the operation was partial or failed.

Step-by-step fix

The fix is a paginated, rate-limit-aware backend function. Five steps.

1. Add a Deno backend function

In the editor, create a new backend function called bulkDeleteArchived. Choose the Deno runtime, not a frontend handler. The function must run server-side to access the SDK with full quota.

2. Implement pagination + batching

// functions/bulkDeleteArchived.ts
import { base44 } from '@base44/sdk';

export async function bulkDeleteArchived(filter: Record<string, unknown>) {
  const BATCH_SIZE = 200;    // rows per query; well under the 5,000-row cap
  const PAGE_DELAY_MS = 500; // pause between pages to stay clear of 429s
  let totalDeleted = 0;

  while (true) {
    // Always fetch the first page of remaining matches. Deleting shrinks
    // the result set, so repeated first-page queries walk the whole dataset.
    const page = await base44.entities.YourEntity.list({
      filter,
      limit: BATCH_SIZE,
    });

    if (page.length === 0) break; // nothing left matching the filter

    for (const record of page) {
      await base44.entities.YourEntity.delete(record.id);
    }

    totalDeleted += page.length;
    // Pace the loop so thousands of sequential deletes don't trip the
    // rate-limit throttle.
    await new Promise((resolve) => setTimeout(resolve, PAGE_DELAY_MS));
  }

  return { deleted: totalDeleted };
}

Two notes. First, BATCH_SIZE = 200 keeps each query well under the 5,000-row cap and avoids loading too much into memory. Second, PAGE_DELAY_MS = 500 paces deletes to stay clear of the 429 rate-limit threshold. Tune both numbers to your dataset and rate-limit observations.

3. Wire the admin UI to the function

Replace any existing client-side bulk-delete logic with a single call to the new backend function. Show a progress indicator; the function may take minutes for large datasets.
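
For illustration, a hedged sketch of the button handler. The endpoint path and response handling here are hypothetical; wire it to however your base44 app actually exposes backend functions:

// Admin UI handler (hypothetical wiring; adjust the endpoint to wherever
// bulkDeleteArchived is served in your app).
async function onDeleteArchivedClick(setStatus: (msg: string) => void) {
  setStatus('Deleting... this can take minutes for large datasets.');
  try {
    const res = await fetch('/functions/bulkDeleteArchived', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ filter: { archived: true } }),
    });
    if (!res.ok) throw new Error(`Function returned ${res.status}`);
    const { deleted } = await res.json();
    setStatus(`Done. Deleted ${deleted} records.`);
  } catch (err) {
    setStatus(`Bulk delete failed: ${err}. The function is safe to re-run.`);
  }
}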

4. Add idempotency and resumability

If the function fails partway through, you want it safe to re-run. Because each iteration queries fresh records that match the filter, the function naturally resumes. But add a maximum iteration cap (e.g., 1,000 pages) to prevent runaway loops on bad filters.
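
A minimal sketch of that guard, assuming the same SDK surface as step 2. The loop body is unchanged; only the termination condition differs, and the return value now reports whether the run completed or hit the cap:

// Capped variant of the step 2 loop: cannot spin forever on a filter that
// keeps returning records the function fails to delete.
import { base44 } from '@base44/sdk';

const MAX_PAGES = 1000;

export async function bulkDeleteArchivedCapped(filter: Record<string, unknown>) {
  const BATCH_SIZE = 200;
  const PAGE_DELAY_MS = 500;
  let totalDeleted = 0;

  for (let pages = 0; pages < MAX_PAGES; pages++) {
    const page = await base44.entities.YourEntity.list({ filter, limit: BATCH_SIZE });
    if (page.length === 0) {
      return { deleted: totalDeleted, complete: true };
    }
    for (const record of page) {
      await base44.entities.YourEntity.delete(record.id);
    }
    totalDeleted += page.length;
    await new Promise((resolve) => setTimeout(resolve, PAGE_DELAY_MS));
  }

  // Exited via the cap, not an empty page: report a partial run so the
  // operator knows to re-run or investigate the filter.
  return { deleted: totalDeleted, complete: false };
}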

5. Test on a staging copy first

Deletes are irreversible. Run the function on a cloned dataset before pointing it at production. Validate the count of deleted records matches expectations.
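
A minimal post-run check on the staging copy, reusing the list call from the delete loop (the archived: true filter matches the reproduction steps above):

// After the function reports completion, confirm nothing matching the
// filter survived. Any leftover record means the run was partial.
const leftovers = await base44.entities.YourEntity.list({
  filter: { archived: true },
  limit: 1,
});
if (leftovers.length > 0) {
  throw new Error('Bulk delete left matching records behind; re-run or investigate.');
}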

DIY vs hire decision

DIY is realistic if you are comfortable in Deno/TypeScript and your dataset is under ~100,000 records. The function above is small. The hard parts are tuning batch size and delay against your specific rate-limit behavior, plus integrating cleanly with existing admin UX.

Hire if any of these apply:

  • Dataset exceeds 100,000 records and you need it done overnight.
  • Bulk delete must coordinate with other operations transactionally (delete archived orders + their line items + their payment refunds).
  • Your team has no Deno experience and the AI agent's code keeps failing on rate limits.

We have shipped this exact function for ~30 base44 clients. We know the rate-limit thresholds and the failure modes that are not obvious from the docs.

Need this fix shipped this week?

Standard scope: production-grade bulk delete function, idempotent, rate-limit aware, with admin UI integration and a tested rollback plan. Fix-sprint pricing, 48-hour turnaround.

Book a fix sprint or order a $497 audit if you want to confirm scope first.

QUERIES

Frequently asked questions

Q.01 Why doesn't base44 ship a bulk delete primitive?
A.01

Base44's SDK was designed around single-record CRUD operations to keep the AI agent's mental model simple. Bulk operations require thinking about partial failures, transactions, and rate limits — concepts the agent handles poorly. Rather than expose a primitive the agent might misuse, the platform omitted it. The omission is documented on the feedback board as a "Critical Bug/SDK Missing" production blocker.

Q.02 What is the actual workaround production users adopt?
A.02

A custom backend function written in the Deno runtime that accepts a filter, paginates server-side through matching records in batches of 100–500, and deletes each batch with a brief delay to avoid rate limits. The function returns a count of deleted records. This pattern works but you write it yourself; it is not a stock SDK call.

Q.03 Will the AI agent generate a working bulk delete function for me?
A.03

Sometimes. The agent often produces code that looks plausible but skips pagination, ignores rate limits, or fetches everything to the function's memory before deleting — which fails on large datasets. Test the agent's output with a 10,000-row dataset before trusting it on production. Expect to write or fix at least the pagination logic by hand.

Q.04 How does this connect to base44's 5,000-item-per-request limit?
A.04

Base44 enforces a 5,000-item maximum per query response (introduced 2025-11-27). Any bulk operation that fetches all matches to the client breaks immediately above 5,000 records. Server-side pagination in a Deno function is the only path that respects the limit. Client-side workarounds were never going to scale; the platform limit just makes that explicit.

Q.05 Is this enough of a problem to migrate off base44?
A.05

On its own, no — a custom function fixes it. But the missing bulk delete is one symptom of a deeper pattern: the SDK lacks operational primitives a production team needs (bulk update, transactions, scheduled jobs without active users, real-time webhooks). If you keep hitting the same class of missing primitive, the cumulative drag is the migration trigger, not any single gap.

Q.06 When should I hire someone instead of writing the function myself?
A.06

If your dataset exceeds 100,000 records, if your delete needs to be transactionally consistent with other operations, or if you cannot afford the AI agent's typical errors on infrastructure code. We have written this exact function for clients many times and ship a tested version in 24–48 hours.

NEXT STEP

Need this fix shipped this week?

Book a free 15-minute call or order a $497 audit. We will respond within one business day.