SCHEMA.BIZ

Schema Diff

Compare two database schemas and see exactly what changed. Generate ALTER TABLE migration scripts to move from Schema A to Schema B.

Free Database Schema Diff & Migration Generator

Database schemas evolve constantly — new features need new tables, growing datasets require column type changes, and refactoring demands relationship modifications. Our Schema Diff tool compares two versions of a database schema and generates the exact ALTER TABLE statements needed to migrate from one to the other. All processing runs in your browser, so your database structures never leave your device.

What Are Database Migrations and Why Do They Matter?

A database migration is a set of SQL statements that transforms a database schema from one version to another. When you add a feature that requires a new table, change a column's type to accommodate different data, or add an index to improve query performance, you are performing a schema migration. In production systems, migrations are the controlled, repeatable mechanism for evolving database structure alongside application code.
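A minimal migration covering those three operations might look like the following sketch (table and column names are hypothetical; the ALTER COLUMN ... TYPE syntax shown is PostgreSQL's):

```sql
-- Migration 0042: add shipment tracking (illustrative example)

-- New feature needs a new table
CREATE TABLE shipments (
    id         INTEGER PRIMARY KEY,
    order_id   INTEGER NOT NULL REFERENCES orders(id),
    carrier    VARCHAR(50) NOT NULL,
    shipped_at TIMESTAMP
);

-- Growing data requires a wider column type
ALTER TABLE orders ALTER COLUMN tracking_code TYPE VARCHAR(100);

-- Speed up a slow lookup with an index
CREATE INDEX idx_shipments_order_id ON shipments (order_id);
```

Each statement changes persistent database structure, which is why migrations are versioned and reviewed rather than run ad hoc.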

Migrations matter because database schema changes are among the riskiest operations in software deployment. Unlike application code, which can be rolled back by deploying a previous version, schema changes alter the persistent state of your system. A dropped column deletes data permanently. A type change can fail halfway through on a table with millions of rows, leaving the database in an inconsistent state. A missing index can cause a query that ran in milliseconds to take minutes under production load.

Good migration practices reduce this risk by making schema changes explicit, versioned, reviewable, and reversible. Each migration is a file in version control with a clear UP script (apply the change) and an optional DOWN script (reverse it). The team reviews migrations in pull requests just like application code. Migration frameworks (Flyway, Liquibase, Prisma Migrate, Alembic, Rails ActiveRecord Migrations) track which migrations have been applied to each environment, preventing double-application or missed migrations.

The Risks of Manual Schema Changes in Production

When developers make schema changes by running ad-hoc ALTER TABLE statements directly against a production database, several things go wrong. First, there is no record of what changed or why. Three months later, when someone asks why the orders table has a legacy_status column, nobody remembers. Second, the same change must be manually replicated in every environment — development, staging, QA, production — and any inconsistency between environments leads to bugs that only appear in production.

Third, manual changes cannot be reviewed before execution. A typo in an ALTER TABLE statement can drop the wrong column or change the wrong type. Without a review step, these mistakes reach production. Fourth, manual changes are not easily reversible. If you realize after deploying that the migration broke something, you need to figure out the reverse operation under pressure, which is when mistakes compound.

Schema diffing tools address these problems by generating migration scripts automatically from a declarative target state. Instead of writing ALTER TABLE statements by hand, you define what the schema should look like after the change, and the tool calculates the exact steps to get there. The generated script can be reviewed, tested, and version-controlled like any other code artifact.

How Schema Diffing Works: Comparing Two Database States

Schema diffing compares two snapshots of a database schema — typically the current production schema (Schema A) and the desired target schema (Schema B). The diff engine parses both schemas into an internal representation of tables, columns, types, constraints, default values, indexes, and foreign key relationships, then computes the set of changes needed to transform A into B.

Our diff engine detects six categories of changes:

  • Tables added. A table exists in Schema B but not in Schema A. The migration creates the table with all its columns and constraints.
  • Tables removed. A table exists in Schema A but not in Schema B. The migration drops the table, but only after dropping any foreign keys that reference it.
  • Columns added. A column exists in a table in Schema B that is not present in the same table in Schema A. The migration adds the column with its type, constraints, and default value.
  • Columns removed. A column in Schema A is absent from Schema B. The migration drops the column, first removing any foreign key constraints that reference it.
  • Columns modified. A column exists in both schemas but with different properties — a different data type, a changed default value, a constraint added or removed (NOT NULL, UNIQUE). The migration alters the column.
  • Foreign key changes. A relationship has been added, removed, or modified (different referenced table or column, different ON DELETE action).

Each change is displayed with clear before/after comparisons and color coding — green for additions, red for removals, amber for modifications. The generated migration SQL is properly ordered: foreign key constraints are dropped before referenced columns or tables are removed, and new tables are created before tables that reference them.
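As a sketch of what the generated output looks like for three of those categories (hypothetical products table, PostgreSQL syntax):

```sql
-- [added] column present in Schema B but not Schema A
ALTER TABLE products ADD COLUMN category_id INTEGER;

-- [modified] column exists in both schemas with a different type
ALTER TABLE products ALTER COLUMN price TYPE NUMERIC(10,2);

-- [removed] column present in Schema A but not Schema B
ALTER TABLE products DROP COLUMN legacy_sku;
```

The [added], [modified], and [removed] labels correspond to the green, amber, and red color coding in the visual diff.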

Migration SQL Generation

The migration generator creates a complete SQL script that transforms Schema A into Schema B, handling dialect-specific syntax automatically:

  • PostgreSQL uses ALTER COLUMN ... TYPE with a USING clause for type conversions, SET NOT NULL / DROP NOT NULL for constraint changes, and ALTER COLUMN ... SET DEFAULT for default values.
  • MySQL uses MODIFY COLUMN for type and constraint changes, which requires restating the full column definition including the new type, nullability, and default.
  • SQL Server uses ALTER COLUMN with the full new type definition. Changing nullability also requires restating the column's data type, since SQL Server has no equivalent of PostgreSQL's standalone SET NOT NULL / DROP NOT NULL.
  • SQLite has limited ALTER TABLE support — it can add columns but cannot modify or drop them. For column changes, the tool notes that SQLite requires recreating the table (create new table, copy data, drop old table, rename new table).

Each migration statement includes a comment explaining what changed and why, making the script self-documenting for code review.
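To illustrate the dialect differences, here is the same change (making a hypothetical price column NUMERIC(10,2) NOT NULL) as each dialect would express it:

```sql
-- PostgreSQL: separate statements for type and nullability
ALTER TABLE products ALTER COLUMN price TYPE NUMERIC(10,2) USING price::NUMERIC(10,2);
ALTER TABLE products ALTER COLUMN price SET NOT NULL;

-- MySQL: one MODIFY COLUMN restating the full column definition
ALTER TABLE products MODIFY COLUMN price DECIMAL(10,2) NOT NULL;

-- SQL Server: full new definition, including nullability
ALTER TABLE products ALTER COLUMN price DECIMAL(10,2) NOT NULL;

-- SQLite: no column modification; recreate the table instead
CREATE TABLE products_new (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    price NUMERIC NOT NULL
);
INSERT INTO products_new (id, name, price) SELECT id, name, price FROM products;
DROP TABLE products;
ALTER TABLE products_new RENAME TO products;
```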

Best Practices for Database Migration Workflows

Always back up before migrating. Even well-tested migrations can fail on production data that contains edge cases not present in staging — NULL values in columns you are changing to NOT NULL, strings too long for a new VARCHAR limit, or foreign key references to rows that no longer exist. A backup lets you recover from migration failures without data loss.

Test on a copy of production data. Schema changes that execute instantly on empty tables can take hours on tables with millions of rows. Adding a NOT NULL column to a table with 50 million rows requires writing a default value to every existing row. Changing a column type from VARCHAR to INTEGER requires validating and converting every existing value. Test your migration against a realistic data volume to discover performance issues before they cause a production outage.

Make migrations reversible. For each UP migration, write a corresponding DOWN migration that reverses the change. This lets you roll back if the migration causes unexpected application errors. Not all migrations are perfectly reversible (you cannot un-drop a column and recover its data), but having a DOWN script for the structural changes makes rollbacks much faster.
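A reversible migration pair might look like this (hypothetical users table):

```sql
-- UP: apply the change
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- DOWN: reverse it (structure only; any data written to phone is lost)
ALTER TABLE users DROP COLUMN phone;
```

The DOWN script restores the schema, not the data, which is exactly the asymmetry described above.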

Deploy schema changes separately from application code. When a migration and the code that depends on it are deployed simultaneously, a failure in either one leaves the system in an inconsistent state. Instead, deploy migrations in two phases: first, make additive changes (new tables, new columns) that the current application code can ignore. Then deploy the application code that uses the new schema. Finally, remove anything that is no longer needed (old columns, old tables) in a subsequent migration.
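As a sketch, the expand/contract pattern described above splits one logical change into two migrations around a code deployment (column names are hypothetical):

```sql
-- Migration 1 (before code deploy): additive change the current code ignores
ALTER TABLE users ADD COLUMN email_verified BOOLEAN DEFAULT FALSE;

-- ... deploy application code that reads and writes email_verified ...

-- Migration 2 (after the new code is stable): remove what is no longer needed
ALTER TABLE users DROP COLUMN legacy_verification_token;
```

Because each phase is independently safe, a failure at any step leaves a working system.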

Version your migrations. Keep migration files in version control alongside your application code. Use a migration framework (Flyway, Liquibase, Prisma Migrate, Alembic) to track which migrations have been applied to each environment. Never edit a migration that has already been applied — create a new migration instead.

Common Migration Pitfalls

Column renames vs. drop/add. If you rename a column, some migration tools generate a DROP COLUMN followed by an ADD COLUMN rather than a RENAME COLUMN operation. This destroys all existing data in the column. Our diff tool detects this pattern when the old and new columns have the same type, and flags it as a potential rename rather than a drop/add. Always review the generated migration to confirm the intended operation.
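The difference in the generated SQL is easy to miss in review (hypothetical users table):

```sql
-- Destructive: all data in the old column is lost
ALTER TABLE users DROP COLUMN fullname;
ALTER TABLE users ADD COLUMN full_name VARCHAR(100);

-- Data-preserving rename (PostgreSQL, MySQL 8+, SQLite 3.25+)
ALTER TABLE users RENAME COLUMN fullname TO full_name;

-- SQL Server uses a stored procedure instead:
-- EXEC sp_rename 'users.fullname', 'full_name', 'COLUMN';
```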

Data loss from type changes. Changing a column from TEXT to VARCHAR(50) will fail if any existing row contains a string longer than 50 characters. Changing from VARCHAR to INTEGER will fail if any existing value cannot be parsed as a number. Always check your existing data against the new constraints before running the migration. A simple query like SELECT COUNT(*) FROM users WHERE LENGTH(name) > 50 can prevent a migration failure.
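A few pre-flight queries along these lines can be run before the migration (column names are hypothetical; the regex check uses PostgreSQL's `~` operator):

```sql
-- Rows that would violate a new VARCHAR(50) limit
SELECT COUNT(*) FROM users WHERE LENGTH(name) > 50;

-- Rows that would fail a VARCHAR-to-INTEGER conversion (PostgreSQL)
SELECT COUNT(*) FROM orders WHERE quantity_text !~ '^[0-9]+$';

-- Rows that would violate a new NOT NULL constraint
SELECT COUNT(*) FROM users WHERE email IS NULL;
```

If any of these return a nonzero count, clean up the data first or the migration will fail partway through.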

Foreign key ordering. You cannot drop a table that is referenced by foreign keys in other tables. You cannot add a foreign key to a table that references a table that does not exist yet. Migration scripts must order operations correctly: drop foreign key constraints first, then drop or modify tables, then create new tables, then add new foreign key constraints. Our diff tool handles this ordering automatically.
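The correct ordering for a migration that drops one referenced table and adds another might look like this sketch (hypothetical table names):

```sql
-- 1. Drop foreign keys that reference the table being removed
ALTER TABLE order_items DROP CONSTRAINT fk_order_items_warehouse;

-- 2. Now the referenced table can be dropped
DROP TABLE warehouses;

-- 3. Create new tables before anything references them
CREATE TABLE suppliers (
    id   INTEGER PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- 4. Add new foreign keys last, once both sides exist
ALTER TABLE order_items
    ADD CONSTRAINT fk_order_items_supplier
    FOREIGN KEY (supplier_id) REFERENCES suppliers(id);
```

Reversing any pair of these steps produces a constraint violation, which is why automatic ordering matters.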

Table locking on large tables. ALTER TABLE operations on tables with millions of rows can lock the table for the duration of the change, blocking all reads and writes. In MySQL, adding a column or changing a type on a large table can lock it for minutes or hours. Use online DDL tools (pt-online-schema-change for MySQL, pg_repack for PostgreSQL) or the database's built-in online DDL support to perform zero-downtime migrations on large tables.

Related Tools

Design your target schema visually using our Schema Designer, or paste existing SQL into our SQL to Diagram tool to visualize your current schema before comparing. Browse our Schema Templates library for well-designed starting points. For API-level schema work, see our JSON Schema Validator. To learn more, read our Database Design Fundamentals guide.

Frequently Asked Questions

Is my database schema sent to a server?

No. The diff engine runs entirely in your browser. Both schemas and the generated migration script stay on your device. This is critical when working with proprietary production schemas.

What SQL dialects are supported?

The tool generates migration SQL for PostgreSQL, MySQL, SQLite, and SQL Server. Each dialect uses the correct syntax for ALTER TABLE operations, type names, constraint modifications, and quoting conventions. SQLite has limited ALTER TABLE support, which the tool documents in the output.

Can I paste existing SQL to compare two schemas?

Yes. Paste CREATE TABLE statements from your current database in the Schema A pane and the target schema in the Schema B pane. The parser handles standard DDL syntax for all four supported dialects. You can also use our lightweight DSL syntax if you prefer.

Does the tool detect column renames?

The diff engine flags potential renames when a column is removed and a new column with the same type is added in the same table. It displays this as a warning so you can choose the correct operation — a rename preserves data while a drop/add destroys it. Always review flagged changes before running the migration.

How do I handle migrations for tables with millions of rows?

For large tables, ALTER TABLE operations can lock the table and take a long time. Use online DDL tools like pt-online-schema-change (MySQL) or pg_repack (PostgreSQL) to perform changes without blocking reads and writes. Test migration performance against a copy of your production data to estimate execution time before running in production.