@fiftin
Created October 5, 2015 07:04
Convert PostgreSQL to SQLite
1. Dump the data-only SQL to a file
$ pg_dump --data-only --inserts YOUR_DB_NAME > dump.sql
2. scp the dump to your local machine
3. Remove the SET statements at the top
such as:
SET statement_timeout = 0;
SET client_encoding = 'SQL_ASCII';
4. Remove the setval sequence queries
such as:
SELECT pg_catalog.setval('MY_OBJECT_id_seq', 10, true);
5. Replace true => 't' and false => 'f'
-- These:
INSERT INTO table_name VALUES (1, true, false);
-- Should be replaced with:
INSERT INTO table_name VALUES (1, 't', 'f');
6. Add BEGIN; and END; to wrap the whole thing in a transaction
7. Import
$ rm db/development.sqlite3
$ rake db:migrate
$ sqlite3 db/development.sqlite3
> delete from schema_migrations;
> .read dump.sql
> .exit
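The manual edits in steps 3-6 can be sketched as a small sed pipeline. This is a rough sketch against a hypothetical three-line dump; the boolean rewrite only matches ", true"/", false", so booleans in other positions (and any string data containing those words) still need a manual pass.

```shell
# Hypothetical stand-in for the real pg_dump output:
cat > dump.sql <<'EOF'
SET statement_timeout = 0;
SELECT pg_catalog.setval('MY_OBJECT_id_seq', 10, true);
INSERT INTO table_name VALUES (1, true, false);
EOF

# Steps 3-6: drop SET lines and setval calls, rewrite bare booleans,
# and wrap everything in one transaction.
sed -e '/^SET /d' \
    -e '/pg_catalog\.setval/d' \
    -e "s/, true/, 't'/g" \
    -e "s/, false/, 'f'/g" \
    dump.sql > body.sql
{ echo 'BEGIN;'; cat body.sql; echo 'END;'; } > dump.clean.sql
```

The cleaned file can then be fed to `.read dump.clean.sql` in step 7.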
@makrmark

makrmark commented Jan 8, 2024

This was very helpful but may be out of date now.
I found I had to remove the "public" prefix from table names, and that instead of 't'/'f' I could keep the true/false values as exported.

Not sure if it was just my sequence of steps, but I also performed:
delete from ar_internal_metadata;

(along with delete from schema_migrations;)

sqlite> .version
SQLite 3.39.5 2022-10-14 20:58:05 554764a6e721fab307c63a4f98cd958c8428a5d9d8edfde951858d6fd02daapl
clang-14.0.3
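The "public" prefix cleanup mentioned above can be sketched with sed (the file name and sample line are hypothetical; run it against your real dump.sql):

```shell
# Hypothetical one-line sample dump:
echo "INSERT INTO public.users VALUES (1, 'ann');" > dump.sql

# Strip the schema prefix that newer pg_dump versions put on table names.
# -i.bak edits in place and keeps a backup; this spelling works on both
# GNU and BSD sed.
sed -i.bak 's/INSERT INTO public\./INSERT INTO /' dump.sql
```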

@makrmark

makrmark commented Jan 8, 2024

Just to wrap this up: I had to change some PostgreSQL JSONB fields to JSON, and some integer/string array fields to JSON fields.

I also had to rewrite some date-calculation queries using SQLite's JULIANDAY, and the TRIM functions behaved differently in my search/filter queries and indexes. But that was pretty much it!
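As a sketch of the date-arithmetic difference: where Postgres can subtract dates or use intervals directly, SQLite day differences go through JULIANDAY, e.g.:

```shell
# Day difference between two dates in SQLite (prints 7):
sqlite3 :memory: "SELECT CAST(JULIANDAY('2024-01-08') - JULIANDAY('2024-01-01') AS INTEGER);"
```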

@hirefrank

I turned this into a simple, but effective bash script:
https://github.com/hirefrank/pg-to-sqlite

@equivalent

equivalent commented Jun 14, 2024

I'm on PostgreSQL 16, and the default dump file's INSERT INTO format is not compatible with the one SQLite uses. I found that dumping the DB with --attribute-inserts makes it work:

pg_dump --attribute-inserts --data-only -d my_psql_db > /tmp.dump.sql

Reference: the pg_dump docs

https://hhh.how/notes/2067-convert-postgresql-to-sqlite

@aaronmoodie

aaronmoodie commented Jul 1, 2024

This was super helpful. Thank you!

I'm migrating a Rails database, and thought it worth adding that the INSERTs for both the ar_internal_metadata and schema_migrations tables should be removed, since those rows are created when running rails db:migrate.

They will raise UNIQUE constraint failed errors otherwise.
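Filtering those rows out of the dump can be sketched with grep (the sample lines below are hypothetical):

```shell
# Hypothetical sample dump:
printf '%s\n' \
  'INSERT INTO schema_migrations VALUES (20240101000000);' \
  'INSERT INTO ar_internal_metadata VALUES (1);' \
  'INSERT INTO users VALUES (1);' > dump.sql

# Drop the Rails bookkeeping rows; rails db:migrate recreates them,
# and leaving them in triggers the UNIQUE constraint errors.
grep -v -e 'INSERT INTO schema_migrations' \
        -e 'INSERT INTO ar_internal_metadata' \
        dump.sql > dump.rails.sql
```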

@hirefrank

Thanks @equivalent and @aaronmoodie I added your suggestions to my script -- you can see them here:
hirefrank/pg-to-sqlite#1

Also @aaronmoodie 👋 it's been a long time since Etsy!

@aaronmoodie

Oh! hey @hirefrank! 👋 I didn't realise that was you! Thanks for updating the script and for adding my suggestions.

@jvitoroc

jvitoroc commented Sep 7, 2024

@hirefrank you forgot to remove pg_catalog occurrences

@hirefrank

Thanks @jvitoroc! Pls review and test: hirefrank/pg-to-sqlite#3

If this works, I'll merge!

@hirefrank

Merged!

@krtschmr

such a clusterfuck....

it's not straightforward if you use enums, lots of timestamps, etc.; the db dump is just not compatible.

solution: dump to raw data (CSV), and load from raw data.

Add this to your ApplicationRecord (or adapt it however you want). Run dump_all_models_to_csv against the Postgres database, then import_all_models_from_csv once the app points at SQLite.


  # Requires the stdlib CSV library: require 'csv'

  # Export every record of this model to db/seeds/<table>.csv.
  def self.to_csv
    file_path = Rails.root.join('db', 'seeds', "#{name.underscore.pluralize}.csv")
    FileUtils.mkdir_p(File.dirname(file_path))

    CSV.open(file_path, 'wb') do |csv|
      csv << column_names
      unscoped.each do |record|
        csv << record.attributes.values
      end
    end
    true
  end

  # Re-import the CSV written by to_csv, parsing datetime columns back
  # into Time objects so the new adapter stores them correctly.
  def self.from_csv
    file_path = Rails.root.join('db', 'seeds', "#{name.underscore.pluralize}.csv")
    raise "File does not exist: #{file_path}" unless File.exist?(file_path)

    column_types = columns_hash
    rows = []
    CSV.foreach(file_path, headers: true) do |row|
      attributes = row.to_hash
      column_types.each do |column_name, column_info|
        next unless %i[datetime timestamp].include?(column_info.type)
        next unless attributes[column_name]

        attributes[column_name] = Time.parse(attributes[column_name])
      end
      rows << attributes
    end

    if rows.any?
      insert_all(rows) # skips validations/callbacks; fine for a raw copy
      puts "#{rows.size} records imported successfully!"
    end
  end

  # Export every leaf model (classes with no subclasses of their own).
  def self.dump_all_models_to_csv
    descendants.each do |model|
      next unless model.descendants.empty?

      model.to_csv
      puts "Exported #{model.name} to CSV successfully."
    end
  end

  # Wipe and reload each table that has a matching CSV in db/seeds.
  def self.import_all_models_from_csv
    seeds_path = Rails.root.join('db', 'seeds')
    Dir.glob(seeds_path.join('*.csv')).each do |file_path|
      file_name = File.basename(file_path, '.csv')
      model_name = file_name.singularize.camelize
      if Object.const_defined?(model_name)
        model = model_name.constantize
        model.unscoped.delete_all
        model.from_csv
        puts "Imported data from #{file_name}.csv successfully."
      else
        puts "No model found for #{file_name}.csv; skipping import."
      end
    end
  end

@arimendelow

Specifically for use with https://github.com/redwoodjs/sdk (which uses Cloudflare D1), when migrating from https://github.com/redwoodjs/graphql/ (which uses Postgres), I created the following script based on @hirefrank's script:

https://gist.github.com/arimendelow/e60f8a1303dc00bd4a21e84142623dbc


@arimendelow

oooo will take a look @hirefrank !!! i'll reply back on your PR :)
