Bubble Database to PostgreSQL: Migration Walkthrough
A step-by-step walkthrough for migrating your Bubble database to PostgreSQL — covering data export, schema design, type mapping, relationship handling, and data validation.
MigrateLab Team
Migration Experts

Why This Migration Matters
Your Bubble database contains every piece of data your application has ever created: users, transactions, content, configurations, files. Migrating it correctly is the foundation of your entire migration. Get the database wrong, and everything built on top of it — API routes, UI components, business logic — will be wrong too.
The good news: Bubble's data model is simple. Data types map to tables. Fields map to columns. Linked data types map to foreign keys. The concepts translate cleanly. The challenge is in the details — Bubble's unique ID system, its handling of lists and linked data, and its inconsistent data formats in CSV exports.
PostgreSQL is the ideal target for several reasons: it's one of the most popular open-source relational databases, it has excellent JSON support for semi-structured data, it's supported by virtually every hosting provider, and ORMs like Prisma and Drizzle provide type-safe access from TypeScript. Your Next.js application will interact with PostgreSQL through an ORM, giving you autocomplete, type checking, and migration management.
Step 1: Export Your Data from Bubble
Bubble provides CSV export for each data type. Here's the process:
- Open your Bubble editor and go to the "Data" tab
- Select "App Data" to see your data types
- For each data type, click on it and then click "Export data"
- Bubble generates a CSV file and emails you a download link
- Download and save each CSV with a clear name (users.csv, orders.csv, etc.)
Important considerations during export:
- Export all data types, not just the obvious ones. Bubble creates some data types automatically (like User) and you may have created auxiliary data types that support your main ones. Export everything.
- Note the field order and names. Bubble's CSV export uses the field display names, which may differ from the internal names used in workflows. Document the mapping.
- Export during low-traffic periods. Large exports can impact your Bubble app's performance. Run them during off-hours.
- Download files and images separately. File and image fields in the CSV contain URLs pointing to Bubble's S3 storage. You must download these files before canceling your Bubble subscription.
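Before cancelling, you need an inventory of every Bubble-hosted file URL in your exports. A minimal sketch, assuming your CSVs are already parsed into objects and that Bubble's files live on an AWS S3 domain (check the pattern against the URLs you actually see in your export):

```typescript
// Collect Bubble-hosted file URLs from exported CSV rows so they can be
// re-downloaded before the Bubble subscription is cancelled.
// Assumption: files live on an amazonaws.com domain — adjust to your export.
const BUBBLE_FILE_URL = /https?:\/\/[^\s,"]*amazonaws\.com[^\s,"]*/g;

function extractFileUrls(rows: Record<string, string>[]): string[] {
  const urls = new Set<string>(); // de-duplicate shared files
  for (const row of rows) {
    for (const value of Object.values(row)) {
      for (const match of value.matchAll(BUBBLE_FILE_URL)) {
        urls.add(match[0]);
      }
    }
  }
  return [...urls];
}
```

Feed the resulting list to a downloader (e.g. `fetch` in Node 18+) and store the files in your own bucket; keep the original URL alongside the new one so you can rewrite references during import.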
Step 2: Analyze Your Data Structure
Before designing your PostgreSQL schema, you need to understand what you're working with. Open each CSV and document:
- Column names and their Bubble types — text, number, yes/no, date, file, image, geographic address, list, linked data type
- Relationships between data types — which data types link to which others, and whether those links are one-to-one, one-to-many, or many-to-many
- List fields — fields that contain multiple values (exported as comma-separated strings in the CSV)
- Required vs optional fields — which fields always have data and which are sometimes empty
- Data volumes — row counts for each data type, which affect performance decisions like indexing
Create a spreadsheet or document that maps every Bubble data type and field to its intended PostgreSQL table and column. This document is your migration spec — refer to it throughout the process.
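Much of this spec can be generated mechanically. A small sketch that profiles parsed CSV rows — per-column fill rate and distinct-value count are exactly what you need to decide nullability and whether a field is really an option set:

```typescript
// Profile parsed CSV rows to seed the migration spec. Column and field
// names come straight from your export; nothing here is Bubble-specific.
interface ColumnProfile {
  column: string;
  filled: number;   // rows with a non-empty value (informs NULL vs NOT NULL)
  distinct: number; // distinct non-empty values (small => likely option set)
}

function profileRows(rows: Record<string, string>[]): ColumnProfile[] {
  const columns = rows.length > 0 ? Object.keys(rows[0]) : [];
  return columns.map((column) => {
    const values = rows
      .map((row) => row[column] ?? "")
      .filter((v) => v.trim() !== "");
    return { column, filled: values.length, distinct: new Set(values).size };
  });
}
```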
Step 3: Design Your PostgreSQL Schema
Now translate your Bubble data structure into a PostgreSQL schema. Use Prisma or Drizzle to define the schema in TypeScript:
Type Mapping
Every Bubble field type has a PostgreSQL equivalent:
- Bubble text → VARCHAR(255) or TEXT. Use VARCHAR for short strings (names, titles) with a known maximum length. Use TEXT for long-form content (descriptions, notes) with no practical limit.
- Bubble number → INTEGER, BIGINT, NUMERIC, or FLOAT. Use INTEGER for whole numbers (quantities, counts). BIGINT for large numbers (IDs from external systems). NUMERIC(10,2) for money (never use FLOAT for currency). FLOAT for scientific/measurement values.
- Bubble yes/no → BOOLEAN. Direct mapping. Note that a PostgreSQL BOOLEAN column is NULL unless you declare a default — add DEFAULT false if your application treats missing values as false.
- Bubble date → TIMESTAMP WITH TIME ZONE. Always store timestamps with timezone information. Bubble dates include timezone data in the CSV export, so parse them carefully.
- Bubble file/image → TEXT. Store the URL after re-hosting the file to your own storage (S3, Cloudflare R2, or local filesystem). Don't store Bubble's S3 URLs — they'll break when you cancel.
- Bubble geographic address → JSONB. Store as a JSON object with lat, lng, formatted_address, and component fields. Alternatively, create separate latitude (FLOAT) and longitude (FLOAT) columns plus an address (TEXT) column.
- Bubble option sets → ENUM or VARCHAR. For small, fixed sets of values (status: draft/published/archived), use PostgreSQL ENUM types. For option sets that might change, use VARCHAR with application-level validation.
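Put together, these mappings look like the following Prisma model — a hypothetical Order with example field names, not taken from any real app:

```prisma
// Hypothetical model illustrating the type mappings above.
enum OrderStatus {
  draft
  published
  archived
}

model Order {
  id         Int         @id @default(autoincrement())
  title      String      @db.VarChar(255)   // Bubble text (short)
  notes      String?     @db.Text           // Bubble text (long-form)
  quantity   Int                            // Bubble number (whole)
  total      Decimal     @db.Decimal(10, 2) // Bubble number (money)
  isPaid     Boolean     @default(false)    // Bubble yes/no
  createdAt  DateTime    @db.Timestamptz(6) // Bubble date
  receiptUrl String?     @db.Text           // Bubble file (re-hosted URL)
  shipTo     Json?                          // Bubble geographic address
  status     OrderStatus @default(draft)    // Bubble option set
}
```

Prisma maps `Json` to JSONB on PostgreSQL by default, which suits the geographic-address case.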
Handling Bubble's Unique IDs
Every record in Bubble has a unique ID — a long alphanumeric string like "1609459200000x123456789012345". These IDs appear in your CSV exports and are used to link records together.
You have two options for handling these in PostgreSQL:
- Generate new IDs and create a mapping table. Create auto-incrementing integer IDs or UUIDs for your PostgreSQL tables. Maintain a temporary mapping table that links Bubble IDs to new IDs. Use this mapping when importing linked records. This is the cleaner approach for new applications.
- Store Bubble IDs as a column. Add a bubble_id VARCHAR column to each table and store the original Bubble ID. Use your own ID as the primary key but keep the Bubble ID for reference. This is useful if external systems reference Bubble IDs.
We recommend option 1 for most migrations. Generate clean integer or UUID primary keys and use a temporary mapping during import. Once all data is imported and verified, you can drop the mapping table.
Step 4: Handle Relationships and Linked Data
Relationships are the trickiest part of the migration. Bubble has three relationship patterns:
One-to-One (Linked Data Type)
A Bubble field that links to another data type (e.g., Order has a "Customer" field linking to User). In the CSV, this appears as the linked record's Bubble unique ID.
In PostgreSQL, this becomes a foreign key column. Your Order table gets a customer_id column that references the User table's primary key. During import, you look up the Customer's Bubble ID in your mapping table to get the corresponding PostgreSQL ID.
One-to-Many (Reverse Lookup)
When a User has many Orders, Bubble doesn't store this on the User side — you query for it using "Do a search for Orders where Customer = This User." In PostgreSQL, the relationship exists through the foreign key on the "many" side (Order.customer_id). No special handling needed during import — the foreign key on Order automatically creates the one-to-many relationship.
Many-to-Many (List of Linked Data Types)
A Bubble field that contains a list of linked records (e.g., a Project has a "team_members" field containing a list of Users). In the CSV, this appears as a comma-separated string of Bubble unique IDs.
In PostgreSQL, many-to-many relationships require a junction table (also called a join table). For Project ↔ User, you create a project_members table with two foreign key columns: project_id and user_id. During import, parse the comma-separated Bubble IDs, look up each one in your mapping table, and insert a row into the junction table for each relationship.
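The junction-row step can be sketched as a pure function. Field names (`projectId`, `userId`) are illustrative; `idMap` is the Bubble-ID-to-PostgreSQL-ID mapping built during import:

```typescript
// Turn a Bubble list-of-linked-IDs cell into junction-table rows.
// Unknown Bubble IDs (orphaned references) are collected for review
// instead of being imported.
interface JunctionRow {
  projectId: number;
  userId: number;
}

function buildJunctionRows(
  projectId: number,
  memberCell: string,          // e.g. "1609...x1 , 1609...x2"
  idMap: Map<string, number>,
): { rows: JunctionRow[]; orphans: string[] } {
  const rows: JunctionRow[] = [];
  const orphans: string[] = [];
  for (const raw of memberCell.split(",")) {
    const bubbleId = raw.trim();
    if (bubbleId === "") continue;          // skip empty cells
    const userId = idMap.get(bubbleId);
    if (userId === undefined) orphans.push(bubbleId);
    else rows.push({ projectId, userId });
  }
  return { rows, orphans };
}
```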
Step 5: Write the Import Script
With your schema designed and your CSV files ready, write an import script. Use TypeScript with Prisma for type-safe database operations. The script should:
- Parse CSV files. Use a library like csv-parse or papaparse to read each CSV file into an array of objects.
- Import tables without dependencies first. Start with data types that don't link to other data types (or link only to the built-in User type). This ensures that when you import dependent records, the referenced records already exist.
- Build the ID mapping. As you import each record, store a mapping from Bubble ID to new PostgreSQL ID. Use a Map<string, number> for fast lookups.
- Import dependent tables using the mapping. For each linked field, look up the Bubble ID in the mapping to get the PostgreSQL foreign key value.
- Import junction table records for many-to-many relationships. Parse the comma-separated Bubble IDs, resolve each to a PostgreSQL ID, and insert the junction records.
- Validate the import. Count records in each table and compare to the CSV row counts. Spot-check relationships by querying related records and comparing to Bubble.
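The mapping-and-resolution core of the script can be sketched without committing to an ORM. Here `insertUser` and `insertOrder` stand in for your actual database calls (e.g. `prisma.user.create`), and the Bubble column names are examples:

```typescript
// Import users first, recording Bubble ID -> new primary key, then
// resolve each order's Customer link through that mapping.
interface BubbleUserRow  { "unique id": string; email: string }
interface BubbleOrderRow { "unique id": string; Customer: string; total: string }

function importAll(
  users: BubbleUserRow[],
  orders: BubbleOrderRow[],
  insertUser: (email: string) => number,               // returns new PK
  insertOrder: (customerId: number, total: number) => number,
): Map<string, number> {
  const idMap = new Map<string, number>();
  for (const u of users) {
    idMap.set(u["unique id"], insertUser(u.email));    // build the mapping
  }
  for (const o of orders) {
    const customerId = idMap.get(o.Customer);
    if (customerId === undefined) continue;            // orphaned ref: log it
    insertOrder(customerId, Number(o.total));
  }
  return idMap;
}
```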
Import Order Strategy
The import order matters because foreign key constraints require referenced records to exist before referencing records. A typical import order:
- Users (no dependencies)
- Categories, Tags, and other lookup tables (no dependencies)
- Primary data types that reference Users (Orders, Projects, etc.)
- Secondary data types that reference primary data types (Order Items, Comments, etc.)
- Junction table records for many-to-many relationships
If you have circular dependencies (rare, but possible in Bubble), import records without the circular foreign key first, then update them in a second pass.
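You can derive this order automatically from a dependency map (table to the tables it references) — a minimal topological sort, which also surfaces circular dependencies:

```typescript
// Derive an import order so referenced rows always exist before the rows
// that reference them. Throws on circular dependencies, which need the
// two-pass approach instead.
function importOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const done = new Set<string>();
  let remaining = Object.keys(deps);
  while (remaining.length > 0) {
    // tables whose every dependency has already been imported
    const ready = remaining.filter((t) => deps[t].every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("circular dependency detected");
    for (const t of ready) { order.push(t); done.add(t); }
    remaining = remaining.filter((t) => !done.has(t));
  }
  return order;
}
```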
Step 6: Data Validation and Cleanup
Bubble's lack of strict data validation means your exported data likely has issues that need cleaning before import:
Common Data Quality Issues
- Inconsistent date formats. Bubble's CSV export sometimes uses different date formats for the same field. Normalize all dates to ISO 8601 format before import.
- Empty strings vs null. Bubble doesn't distinguish between "no value" and "empty string." Decide whether empty strings should be NULL in PostgreSQL and handle consistently.
- Orphaned references. Deleted records leave behind references. A field might contain a Bubble ID that points to a record that no longer exists. Your import script should handle these gracefully — either skip the reference or log it for manual review.
- Duplicate records. Bubble doesn't prevent duplicates by default. Identify and resolve duplicates before import. Decide on deduplication rules: keep the newest record? Merge fields from both?
- HTML in text fields. Bubble's rich text editor stores HTML in text fields. Decide whether to keep the HTML, strip it to plain text, or convert it to Markdown for your new application.
- Number formatting. Currency values might be stored as "$1,234.56" (string with formatting) instead of 1234.56 (number). Parse and clean number fields before import.
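A few of these cleanups reduce to small pure functions. The input formats shown are examples — verify them against your actual export before relying on them:

```typescript
// "$1,234.56" -> 1234.56; returns null for unparseable values.
function parseMoney(raw: string): number | null {
  const cleaned = raw.replace(/[$,\s]/g, "");
  if (cleaned === "") return null;
  const n = Number(cleaned);
  return Number.isFinite(n) ? n : null;
}

// Empty or whitespace-only strings become SQL NULL.
function emptyToNull(raw: string): string | null {
  return raw.trim() === "" ? null : raw;
}

// Normalize any date string JavaScript can parse to ISO 8601 (UTC).
function toIso(raw: string): string | null {
  const d = new Date(raw);
  return Number.isNaN(d.getTime()) ? null : d.toISOString();
}
```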
Validation Checks After Import
Run these checks after your import completes:
- Record counts: Does each table have the expected number of rows?
- Relationship integrity: Do foreign keys point to valid records?
- Data completeness: Are required fields populated?
- Date accuracy: Do dates match the originals (watch for timezone shifts)?
- File availability: Are all referenced file URLs accessible?
- Search functionality: Can you find records that existed in Bubble?
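The record-count check is easy to automate. A sketch — the actual counts would come from your CSV parser and one `SELECT COUNT(*)` per table (e.g. Prisma's `count()`):

```typescript
// Compare CSV row counts against imported table counts and report any
// discrepancies as human-readable strings.
function countMismatches(
  expected: Record<string, number>,   // table -> CSV row count
  actual: Record<string, number>,     // table -> imported row count
): string[] {
  return Object.keys(expected)
    .filter((t) => expected[t] !== (actual[t] ?? 0))
    .map((t) => `${t}: expected ${expected[t]}, got ${actual[t] ?? 0}`);
}
```

An empty result means every table imported the expected number of rows; anything else is a table to investigate before go-live.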
Step 7: Optimize for Production
Once your data is imported and validated, optimize your PostgreSQL database for production use:
Indexing
Add indexes for columns you query frequently. At minimum:
- Primary key indexes (created automatically)
- Foreign key indexes — PostgreSQL does not index foreign key columns automatically, and Prisma only creates the constraint, so add these explicitly
- Email and username columns (for login and search)
- Status and type columns (for filtering)
- Date columns used for sorting (created_at, updated_at)
- Any column used in WHERE clauses in your application queries
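In a Prisma schema, these are `@@index` declarations. A hypothetical fragment (field names are examples):

```prisma
// Hypothetical index declarations; @@index creates a plain B-tree index,
// and @unique creates a unique index as a side effect.
model Order {
  id         Int      @id @default(autoincrement())
  customerId Int
  status     String
  createdAt  DateTime @db.Timestamptz(6)

  @@index([customerId])   // foreign key lookups and joins
  @@index([status])       // filtering
  @@index([createdAt])    // date sorting
}
```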
Query Performance
With proper indexing, your PostgreSQL queries will be orders of magnitude faster than Bubble's data operations. A query that took 2 seconds in Bubble — searching 10,000 records with multiple filters — typically completes in 5-20ms with PostgreSQL. This improvement is one of the most immediately noticeable benefits of migration.
Backups
Set up automated backups from day one. Use pg_dump for logical backups on a daily cron schedule. If you're using a managed PostgreSQL service (Supabase, Neon, Railway), backups are typically included. For self-hosted PostgreSQL, configure automated backups to S3 or another remote storage location.
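For self-hosted setups, a single crontab entry covers the daily logical backup. The connection string and paths below are placeholders (note that `%` must be escaped as `\%` inside crontab):

```shell
# Nightly pg_dump at 03:00 in custom (compressed, restorable with pg_restore)
# format, with the date in the filename. Placeholder credentials and paths.
0 3 * * * pg_dump "postgres://app:secret@localhost:5432/appdb" \
  --format=custom --file="/backups/appdb-$(date +\%F).dump"
```

Pair this with a second job that syncs `/backups` to remote storage and prunes old dumps, and periodically test a restore with `pg_restore` against a scratch database.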
In Bubble, you had little direct control over backups beyond what the platform provided. In PostgreSQL, you can back up your entire database, restore to any point in time, and test your backup restoration process. This level of control is a significant upgrade for data safety.
| Bubble Field Type | Description | PostgreSQL Equivalent |
|---|---|---|
| text | Untyped string | VARCHAR / TEXT |
| number | Generic number | INTEGER / NUMERIC / FLOAT |
| yes/no | Boolean | BOOLEAN |
| date | Date with timezone | TIMESTAMP WITH TIME ZONE |
| file / image | Bubble S3 URL | TEXT (self-hosted URL) |
| geographic address | Composite type | JSONB or lat/lng columns |
| list (of text) | Comma-separated | TEXT[] array |
| linked data type | Bubble unique ID | FOREIGN KEY reference |
| list of linked types | Comma-separated IDs | Junction table |
| option set | Static dropdown | ENUM or VARCHAR |
Database Migration Process
Export CSV files from Bubble
Go to the Data tab in your Bubble editor and export every data type as CSV. Download files and images separately — their S3 URLs will stop working after you cancel.
Tip: Run exports during off-hours to avoid impacting your live app's performance.
Design your PostgreSQL schema
Map every Bubble data type to a table and every field to a column with the appropriate PostgreSQL type. Define relationships, constraints, and indexes in a Prisma schema file.
Tip: Use Prisma's schema validation to catch type mismatches before running migrations.
Write and run the import script
Parse CSVs, import tables in dependency order, build ID mappings for foreign key resolution, and import junction table records for many-to-many relationships.
Tip: Import to a staging database first. Only import to production after thorough validation.
Validate and clean data
Check record counts, verify relationship integrity, fix data quality issues (dates, nulls, duplicates), and run functional tests against your imported data.
Tip: Write automated validation queries that you can re-run after any data corrections.
Optimize and back up
Add indexes for frequently queried columns, set up automated backups, and benchmark query performance against your application's actual query patterns.
Tip: Use EXPLAIN ANALYZE on your most frequent queries to verify indexes are being used.
Need help migrating from Bubble? We handle the technical heavy lifting so you can focus on your business.