@JohnLBevan
Last active December 1, 2020 16:20
A few scripts to aid in migrating database schemas in Postgres. This is a hacky approach, but useful for sorting out issues where environments contain data with schema changes that aren't tracked in `dotnet ef` migrations (for example).
-- note: run each SELECT below on its own, then run the statements it generates, before moving on to the next
-- this is a hacky solution, so just for those quick-fix scenarios
-- rename existing tables to include an underscore on the end
SELECT format('ALTER TABLE %I.%I RENAME TO %I;', table_schema, table_name, table_name || '_')
FROM information_schema.tables
where table_schema = 'cam'
and table_type = 'BASE TABLE'; -- only rename actual tables, not views
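-- e.g. for a hypothetical table "cam.orders" (not from this gist), the query above would generate:
--   ALTER TABLE cam.orders RENAME TO orders_;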
-- remove any constraints on our table
SELECT format('ALTER TABLE %I.%I DROP CONSTRAINT %I;', nsp.nspname, rel.relname, con.conname)
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel
ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp
ON nsp.oid = con.connamespace
WHERE nsp.nspname = 'cam'
order by con.conname; -- hacky: relies on FK_* names sorting before PK_* so FKs are dropped first
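-- e.g. with hypothetical EF-style constraint names on the renamed table, the query above would generate
-- (the FK drop sorting ahead of the PK drop):
--   ALTER TABLE cam.orders_ DROP CONSTRAINT "FK_orders_customers_CustomerId";
--   ALTER TABLE cam.orders_ DROP CONSTRAINT "PK_orders";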
-- drop any remaining indexes (note: this targets every non-system schema; add "n.nspname = 'cam'" to restrict it)
SELECT
format('DROP INDEX %I.%I;', n.nspname, c_ind.relname)
FROM pg_index ind
JOIN pg_class c_ind ON c_ind.oid = ind.indexrelid
JOIN pg_namespace n ON n.oid = c_ind.relnamespace
LEFT JOIN pg_constraint cons ON cons.conindid = ind.indexrelid
WHERE
n.nspname NOT IN ('pg_catalog', 'information_schema') AND
n.nspname !~ '^pg_toast' AND
cons.oid IS NULL; -- skip indexes backing constraints (those were dropped above)
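-- e.g. a hypothetical non-constraint index on the renamed table would come out as:
--   DROP INDEX cam."IX_orders_CustomerId";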
-- then run our code to deploy the new schema (for ease, run the create table statements here, and add any indexes & constraints after the data migration; otherwise you may be restricted where dependencies/FKs exist)
-- Note: you may require default constraints for cases where we have no match and the column's not nullable
--
-- ...
--
-- then run the below to generate the statements to populate your new tables with the old data
-- ordering both column lists by the new table's ordinal_position keeps them aligned deterministically
select format('insert into %I.%I ', tc.table_schema, tc.table_name)
    || '(' || string_agg(format('%I', tc.column_name), ', ' order by tc.ordinal_position) || ')'
    || ' select ' || string_agg(format('%I', fc.column_name), ', ' order by tc.ordinal_position)
    || format(' from %I.%I;', fc.table_schema, fc.table_name)
, fc.table_schema, tc.table_schema, fc.table_name, tc.table_name
from information_schema.columns fc
inner join information_schema.columns tc
on fc.table_schema = tc.table_schema
and fc.table_name = tc.table_name || '_'
and fc.column_name = tc.column_name
where fc.table_schema = 'cam'
group by fc.table_schema, tc.table_schema, fc.table_name, tc.table_name;
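-- e.g. for a hypothetical table "cam.orders" with columns id and name, the query above would generate:
--   insert into cam.orders (id, name) select id, name from cam.orders_;
-- columns only present in the old table are skipped; columns only present in the new table fall back to their defaults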
-- then run the code to deploy the new schema (anything not run in the previous bit; e.g. constraints, indexes, etc)
--
-- ...
--
-- finally, clean up any old tables
SELECT format('DROP TABLE %I.%I;', table_schema, table_name)
FROM information_schema.tables
where table_schema = 'cam'
and table_name like '%\_'; -- the backslash escapes the underscore, so this only matches the renamed tables
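-- e.g. continuing the hypothetical "cam.orders" example, this would generate:
--   DROP TABLE cam.orders_;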